00:00:00.001 Started by upstream project "autotest-per-patch" build number 121022 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.021 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.022 The recommended git tool is: git 00:00:00.022 using credential 00000000-0000-0000-0000-000000000002 00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.041 Fetching changes from the remote Git repository 00:00:00.043 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.071 Using shallow fetch with depth 1 00:00:00.071 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.071 > git --version # timeout=10 00:00:00.093 > git --version # 'git version 2.39.2' 00:00:00.093 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.094 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.094 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.346 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.356 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.368 Checking out Revision 6e1fadd1eee50389429f9abb33dde5face8ca717 (FETCH_HEAD) 00:00:03.368 > git config core.sparsecheckout # timeout=10 00:00:03.380 > git read-tree -mu HEAD # timeout=10 00:00:03.398 > git checkout -f 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=5 00:00:03.418 Commit message: "pool: attach build logs for failed merge builds" 00:00:03.418 > git rev-list --no-walk 6e1fadd1eee50389429f9abb33dde5face8ca717 # timeout=10 00:00:03.518 [Pipeline] Start of Pipeline 00:00:03.531 [Pipeline] library 00:00:03.532 Loading library shm_lib@master 00:00:03.532 Library shm_lib@master is cached. Copying from home. 00:00:03.548 [Pipeline] node 00:00:03.556 Running on CYP10 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:03.558 [Pipeline] { 00:00:03.567 [Pipeline] catchError 00:00:03.568 [Pipeline] { 00:00:03.580 [Pipeline] wrap 00:00:03.588 [Pipeline] { 00:00:03.595 [Pipeline] stage 00:00:03.596 [Pipeline] { (Prologue) 00:00:03.771 [Pipeline] sh 00:00:04.059 + logger -p user.info -t JENKINS-CI 00:00:04.080 [Pipeline] echo 00:00:04.082 Node: CYP10 00:00:04.090 [Pipeline] sh 00:00:04.392 [Pipeline] setCustomBuildProperty 00:00:04.402 [Pipeline] echo 00:00:04.403 Cleanup processes 00:00:04.407 [Pipeline] sh 00:00:04.690 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.951 2444983 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:04.965 [Pipeline] sh 00:00:05.251 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.251 ++ grep -v 'sudo pgrep' 00:00:05.251 ++ awk '{print $1}' 00:00:05.251 + sudo kill -9 00:00:05.251 + true 00:00:05.266 [Pipeline] cleanWs 00:00:05.275 [WS-CLEANUP] Deleting project workspace... 00:00:05.275 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.282 [WS-CLEANUP] done 00:00:05.285 [Pipeline] setCustomBuildProperty 00:00:05.299 [Pipeline] sh 00:00:05.588 + sudo git config --global --replace-all safe.directory '*' 00:00:05.671 [Pipeline] nodesByLabel 00:00:05.672 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.682 [Pipeline] httpRequest 00:00:05.688 HttpMethod: GET 00:00:05.689 URL: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:05.692 Sending request to url: http://10.211.164.96/packages/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:05.695 Response Code: HTTP/1.1 200 OK 00:00:05.696 Success: Status code 200 is in the accepted range: 200,404 00:00:05.696 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:06.514 [Pipeline] sh 00:00:06.802 + tar --no-same-owner -xf jbp_6e1fadd1eee50389429f9abb33dde5face8ca717.tar.gz 00:00:06.822 [Pipeline] httpRequest 00:00:06.826 HttpMethod: GET 00:00:06.827 URL: http://10.211.164.96/packages/spdk_68e12c8e2bf75198b8f7de66790423726da49953.tar.gz 00:00:06.827 Sending request to url: http://10.211.164.96/packages/spdk_68e12c8e2bf75198b8f7de66790423726da49953.tar.gz 00:00:06.831 Response Code: HTTP/1.1 200 OK 00:00:06.831 Success: Status code 200 is in the accepted range: 200,404 00:00:06.832 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_68e12c8e2bf75198b8f7de66790423726da49953.tar.gz 00:00:41.311 [Pipeline] sh 00:00:41.642 + tar --no-same-owner -xf spdk_68e12c8e2bf75198b8f7de66790423726da49953.tar.gz 00:00:44.964 [Pipeline] sh 00:00:45.249 + git -C spdk log --oneline -n5 00:00:45.249 68e12c8e2 autorun_post: Check if skipped tests were executed in per-patch 00:00:45.249 3f2c89791 event: switch reactors to poll mode before stopping 00:00:45.249 443e1ea31 setup.sh: emit command line to /dev/kmsg on Linux 00:00:45.249 a1264177c pkgdep/git: Adjust ICE driver to kernel >= 6.8.x 00:00:45.249 af95268b1 pkgdep/git: Adjust QAT driver to kernel >= 6.8.x 00:00:45.263 [Pipeline] } 00:00:45.275 [Pipeline] // stage 00:00:45.282 [Pipeline] stage 00:00:45.283 [Pipeline] { (Prepare) 00:00:45.297 [Pipeline] writeFile 00:00:45.313 [Pipeline] sh 00:00:45.607 + logger -p user.info -t JENKINS-CI 00:00:45.646 [Pipeline] sh 00:00:45.932 + logger -p user.info -t JENKINS-CI 00:00:45.946 [Pipeline] sh 00:00:46.232 + cat autorun-spdk.conf 00:00:46.232 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.232 SPDK_TEST_NVMF=1 00:00:46.232 SPDK_TEST_NVME_CLI=1 00:00:46.232 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.232 SPDK_TEST_NVMF_NICS=e810 00:00:46.232 SPDK_TEST_VFIOUSER=1 00:00:46.232 SPDK_RUN_UBSAN=1 00:00:46.232 NET_TYPE=phy 00:00:46.241 RUN_NIGHTLY=0 00:00:46.246 [Pipeline] readFile 00:00:46.273 [Pipeline] withEnv 00:00:46.276 [Pipeline] { 00:00:46.290 [Pipeline] sh 00:00:46.578 + set -ex 00:00:46.578 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:46.578 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:46.578 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:46.578 ++ SPDK_TEST_NVMF=1 00:00:46.578 ++ SPDK_TEST_NVME_CLI=1 00:00:46.578 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:46.578 ++ SPDK_TEST_NVMF_NICS=e810 00:00:46.578 ++ SPDK_TEST_VFIOUSER=1 00:00:46.578 ++ SPDK_RUN_UBSAN=1 00:00:46.578 ++ NET_TYPE=phy 00:00:46.578 ++ RUN_NIGHTLY=0 00:00:46.578 + case $SPDK_TEST_NVMF_NICS in 00:00:46.578 + DRIVERS=ice 00:00:46.578 + [[ tcp == \r\d\m\a ]] 00:00:46.578 + [[ -n ice ]] 00:00:46.578 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 
00:00:46.578 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:46.578 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:46.578 rmmod: ERROR: Module irdma is not currently loaded 00:00:46.578 rmmod: ERROR: Module i40iw is not currently loaded 00:00:46.578 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:00:46.578 + true 00:00:46.579 + for D in $DRIVERS 00:00:46.579 + sudo modprobe ice 00:00:46.840 + exit 0 00:00:46.850 [Pipeline] } 00:00:46.869 [Pipeline] // withEnv 00:00:46.876 [Pipeline] } 00:00:46.892 [Pipeline] // stage 00:00:46.902 [Pipeline] catchError 00:00:46.904 [Pipeline] { 00:00:46.917 [Pipeline] timeout 00:00:46.917 Timeout set to expire in 40 min 00:00:46.919 [Pipeline] { 00:00:46.936 [Pipeline] stage 00:00:46.938 [Pipeline] { (Tests) 00:00:46.953 [Pipeline] sh 00:00:47.241 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.241 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.241 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.241 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:47.241 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:47.241 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.241 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:47.241 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.241 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:47.241 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:47.241 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:47.241 + source /etc/os-release 00:00:47.241 ++ NAME='Fedora Linux' 00:00:47.241 ++ VERSION='38 (Cloud Edition)' 00:00:47.241 ++ ID=fedora 00:00:47.241 ++ VERSION_ID=38 00:00:47.241 ++ VERSION_CODENAME= 00:00:47.241 ++ PLATFORM_ID=platform:f38 00:00:47.241 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:47.241 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:47.241 ++ LOGO=fedora-logo-icon 00:00:47.241 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:47.241 ++ HOME_URL=https://fedoraproject.org/ 00:00:47.241 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:47.241 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:47.241 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:47.241 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:47.241 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:47.241 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:47.241 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:47.241 ++ SUPPORT_END=2024-05-14 00:00:47.241 ++ VARIANT='Cloud Edition' 00:00:47.241 ++ VARIANT_ID=cloud 00:00:47.241 + uname -a 00:00:47.241 Linux spdk-cyp-10 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:47.241 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:50.547 Hugepages 00:00:50.547 node hugesize free / total 00:00:50.547 node0 1048576kB 0 / 0 00:00:50.547 node0 2048kB 0 / 0 00:00:50.547 node1 1048576kB 0 / 0 00:00:50.547 node1 2048kB 0 / 0 00:00:50.547 00:00:50.547 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:50.547 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:00:50.547 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:00:50.547 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:00:50.547 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:00:50.547 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:00:50.547 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 
00:00:50.547 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:00:50.547 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:00:50.547 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:00:50.547 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:00:50.547 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:00:50.547 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:00:50.547 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:00:50.547 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:00:50.547 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:00:50.547 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:00:50.547 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:00:50.547 + rm -f /tmp/spdk-ld-path 00:00:50.547 + source autorun-spdk.conf 00:00:50.547 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.547 ++ SPDK_TEST_NVMF=1 00:00:50.547 ++ SPDK_TEST_NVME_CLI=1 00:00:50.547 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.547 ++ SPDK_TEST_NVMF_NICS=e810 00:00:50.547 ++ SPDK_TEST_VFIOUSER=1 00:00:50.547 ++ SPDK_RUN_UBSAN=1 00:00:50.547 ++ NET_TYPE=phy 00:00:50.547 ++ RUN_NIGHTLY=0 00:00:50.547 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:50.547 + [[ -n '' ]] 00:00:50.547 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:50.547 + for M in /var/spdk/build-*-manifest.txt 00:00:50.547 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:50.547 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.547 + for M in /var/spdk/build-*-manifest.txt 00:00:50.547 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:50.547 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:50.547 ++ uname 00:00:50.547 + [[ Linux == \L\i\n\u\x ]] 00:00:50.547 + sudo dmesg -T 00:00:50.547 + sudo dmesg --clear 00:00:50.547 + dmesg_pid=2445992 00:00:50.547 + [[ Fedora Linux == FreeBSD ]] 00:00:50.547 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.547 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:50.547 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:50.547 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:50.547 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:50.547 + [[ -x /usr/src/fio-static/fio ]] 00:00:50.547 + export FIO_BIN=/usr/src/fio-static/fio 00:00:50.547 + FIO_BIN=/usr/src/fio-static/fio 00:00:50.547 + sudo dmesg -Tw 00:00:50.547 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:50.547 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:00:50.547 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:50.547 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.547 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:50.547 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:50.547 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.547 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:50.547 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:50.547 Test configuration: 00:00:50.547 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:50.547 SPDK_TEST_NVMF=1 00:00:50.547 SPDK_TEST_NVME_CLI=1 00:00:50.547 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:50.547 SPDK_TEST_NVMF_NICS=e810 00:00:50.547 SPDK_TEST_VFIOUSER=1 00:00:50.547 SPDK_RUN_UBSAN=1 00:00:50.547 NET_TYPE=phy 00:00:50.547 RUN_NIGHTLY=0 20:32:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:50.547 20:32:15 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:50.547 20:32:15 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:50.547 20:32:15 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:50.547 20:32:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.547 20:32:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.547 20:32:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.547 20:32:15 -- paths/export.sh@5 -- $ export PATH 00:00:50.547 20:32:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:50.548 20:32:15 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:50.548 20:32:15 -- common/autobuild_common.sh@435 -- $ date +%s 00:00:50.548 20:32:15 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713983535.XXXXXX 00:00:50.548 20:32:15 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713983535.YiPjoY 00:00:50.548 20:32:15 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:00:50.548 20:32:15 -- 
common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:00:50.548 20:32:15 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:50.548 20:32:15 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:50.548 20:32:15 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:50.548 20:32:15 -- common/autobuild_common.sh@451 -- $ get_config_params 00:00:50.548 20:32:15 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:00:50.548 20:32:15 -- common/autotest_common.sh@10 -- $ set +x 00:00:50.810 20:32:15 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:50.810 20:32:15 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:00:50.810 20:32:15 -- pm/common@17 -- $ local monitor 00:00:50.810 20:32:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.810 20:32:15 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2446027 00:00:50.810 20:32:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.810 20:32:15 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2446029 00:00:50.810 20:32:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.810 20:32:15 -- pm/common@21 -- $ date +%s 00:00:50.810 20:32:15 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2446031 00:00:50.810 20:32:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:50.810 20:32:15 -- pm/common@21 -- $ date +%s 00:00:50.810 20:32:15 -- pm/common@21 -- $ date +%s 00:00:50.810 20:32:15 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2446034 00:00:50.810 20:32:15 -- pm/common@26 -- $ sleep 1 00:00:50.810 20:32:15 -- pm/common@21 -- $ date +%s 00:00:50.810 20:32:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713983535 00:00:50.810 20:32:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713983535 00:00:50.810 20:32:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713983535 00:00:50.810 20:32:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1713983535 00:00:50.810 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713983535_collect-cpu-load.pm.log 00:00:50.810 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713983535_collect-vmstat.pm.log 00:00:50.810 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713983535_collect-bmc-pm.bmc.pm.log 00:00:50.810 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1713983535_collect-cpu-temp.pm.log 00:00:51.755 20:32:16 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:00:51.755 20:32:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:51.755 20:32:16 -- spdk/autobuild.sh@12 -- $ umask 022 00:00:51.755 20:32:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:51.755 20:32:16 -- spdk/autobuild.sh@16 -- $ date -u 00:00:51.755 Wed Apr 24 06:32:16 PM UTC 2024 00:00:51.755 20:32:16 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:51.755 v24.05-pre-438-g68e12c8e2 00:00:51.755 20:32:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:51.755 20:32:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:51.755 20:32:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:51.755 20:32:16 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:00:51.755 20:32:16 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:00:51.755 20:32:16 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.016 ************************************ 00:00:52.016 START TEST ubsan 00:00:52.016 ************************************ 00:00:52.016 20:32:16 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:00:52.016 using ubsan 00:00:52.016 00:00:52.016 real 0m0.001s 00:00:52.016 user 0m0.001s 00:00:52.016 sys 0m0.000s 00:00:52.016 20:32:16 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:00:52.016 20:32:16 -- common/autotest_common.sh@10 -- $ set +x 00:00:52.016 ************************************ 00:00:52.016 END TEST ubsan 00:00:52.016 ************************************ 00:00:52.016 20:32:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:52.016 20:32:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:52.016 20:32:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:52.016 20:32:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:52.016 20:32:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:52.016 20:32:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:52.016 20:32:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:52.016 20:32:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:52.016 20:32:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:52.277 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:52.278 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:52.538 Using 'verbs' RDMA provider 00:01:05.728 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:20.674 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:20.674 Creating mk/config.mk...done. 00:01:20.674 Creating mk/cc.flags.mk...done. 00:01:20.674 Type 'make' to build. 
00:01:20.674 20:32:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:01:20.674 20:32:43 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:20.674 20:32:43 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:20.674 20:32:43 -- common/autotest_common.sh@10 -- $ set +x 00:01:20.674 ************************************ 00:01:20.674 START TEST make 00:01:20.674 ************************************ 00:01:20.674 20:32:44 -- common/autotest_common.sh@1111 -- $ make -j144 00:01:20.674 make[1]: Nothing to be done for 'all'. 00:01:21.243 The Meson build system 00:01:21.243 Version: 1.3.1 00:01:21.243 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:21.243 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:21.243 Build type: native build 00:01:21.243 Project name: libvfio-user 00:01:21.243 Project version: 0.0.1 00:01:21.243 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:21.243 C linker for the host machine: cc ld.bfd 2.39-16 00:01:21.243 Host machine cpu family: x86_64 00:01:21.243 Host machine cpu: x86_64 00:01:21.243 Run-time dependency threads found: YES 00:01:21.243 Library dl found: YES 00:01:21.243 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:21.243 Run-time dependency json-c found: YES 0.17 00:01:21.243 Run-time dependency cmocka found: YES 1.1.7 00:01:21.243 Program pytest-3 found: NO 00:01:21.243 Program flake8 found: NO 00:01:21.243 Program misspell-fixer found: NO 00:01:21.243 Program restructuredtext-lint found: NO 00:01:21.243 Program valgrind found: YES (/usr/bin/valgrind) 00:01:21.243 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:21.243 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:21.243 Compiler for C supports arguments -Wwrite-strings: YES 00:01:21.243 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:21.243 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:21.243 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:21.243 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:21.243 Build targets in project: 8 00:01:21.243 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:21.243 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:21.243 00:01:21.243 libvfio-user 0.0.1 00:01:21.243 00:01:21.243 User defined options 00:01:21.243 buildtype : debug 00:01:21.243 default_library: shared 00:01:21.243 libdir : /usr/local/lib 00:01:21.243 00:01:21.243 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:21.503 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:21.762 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:21.762 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:21.762 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:21.762 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:21.762 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:21.762 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:21.762 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:21.762 [8/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:21.762 [9/37] Compiling C object samples/null.p/null.c.o 00:01:21.762 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:21.762 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:21.762 [12/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:21.763 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:21.763 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:21.763 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:21.763 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:21.763 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:21.763 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:21.763 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:21.763 [20/37] Compiling C object samples/server.p/server.c.o 00:01:21.763 [21/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:21.763 [22/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:21.763 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:21.763 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:21.763 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:21.763 [26/37] Compiling C object samples/client.p/client.c.o 00:01:21.763 [27/37] Linking target samples/client 00:01:21.763 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:21.763 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:22.024 [30/37] Linking target test/unit_tests 00:01:22.024 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:01:22.024 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:22.024 [33/37] Linking target samples/gpio-pci-idio-16 00:01:22.024 [34/37] Linking target samples/null 00:01:22.024 [35/37] Linking target samples/server 00:01:22.024 [36/37] Linking target samples/lspci 00:01:22.024 [37/37] Linking target samples/shadow_ioeventfd_server 00:01:22.024 INFO: autodetecting backend as ninja 00:01:22.024 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:22.285 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:22.548 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:22.548 ninja: no work to do. 00:01:29.136 The Meson build system 00:01:29.136 Version: 1.3.1 00:01:29.136 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:29.136 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:29.136 Build type: native build 00:01:29.136 Program cat found: YES (/usr/bin/cat) 00:01:29.136 Project name: DPDK 00:01:29.136 Project version: 23.11.0 00:01:29.136 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:29.136 C linker for the host machine: cc ld.bfd 2.39-16 00:01:29.136 Host machine cpu family: x86_64 00:01:29.136 Host machine cpu: x86_64 00:01:29.136 Message: ## Building in Developer Mode ## 00:01:29.136 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:29.136 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:29.136 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:29.136 Program python3 found: YES (/usr/bin/python3) 00:01:29.136 Program cat found: YES (/usr/bin/cat) 00:01:29.136 Compiler for C supports arguments -march=native: YES 00:01:29.136 Checking for size of "void *" : 8 00:01:29.136 Checking for size of "void *" : 8 (cached) 00:01:29.136 Library m found: YES 00:01:29.136 Library numa found: YES 00:01:29.136 Has header "numaif.h" : YES 00:01:29.136 Library fdt found: NO 00:01:29.136 Library execinfo found: NO 00:01:29.136 Has header "execinfo.h" : YES 00:01:29.136 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:29.136 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:29.136 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:29.136 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:29.136 Run-time dependency openssl found: YES 3.0.9 00:01:29.136 Run-time dependency libpcap found: YES 1.10.4 00:01:29.136 Has header "pcap.h" with dependency libpcap: YES 00:01:29.136 Compiler for C supports arguments -Wcast-qual: YES 00:01:29.136 Compiler for C supports arguments -Wdeprecated: YES 00:01:29.136 Compiler for C supports arguments -Wformat: YES 00:01:29.136 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:29.136 Compiler for C supports arguments -Wformat-security: NO 00:01:29.136 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:29.136 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:29.136 Compiler for C supports arguments -Wnested-externs: YES 00:01:29.136 Compiler for C supports arguments -Wold-style-definition: YES 00:01:29.136 Compiler for C supports arguments -Wpointer-arith: YES 00:01:29.136 Compiler for C supports arguments -Wsign-compare: YES 00:01:29.136 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:29.136 Compiler for C supports arguments -Wundef: YES 00:01:29.136 Compiler for C supports arguments -Wwrite-strings: YES 00:01:29.136 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:29.136 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:29.136 Compiler for C supports arguments 
-Wno-missing-field-initializers: YES 00:01:29.136 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:29.136 Program objdump found: YES (/usr/bin/objdump) 00:01:29.136 Compiler for C supports arguments -mavx512f: YES 00:01:29.136 Checking if "AVX512 checking" compiles: YES 00:01:29.136 Fetching value of define "__SSE4_2__" : 1 00:01:29.136 Fetching value of define "__AES__" : 1 00:01:29.136 Fetching value of define "__AVX__" : 1 00:01:29.136 Fetching value of define "__AVX2__" : 1 00:01:29.136 Fetching value of define "__AVX512BW__" : 1 00:01:29.136 Fetching value of define "__AVX512CD__" : 1 00:01:29.136 Fetching value of define "__AVX512DQ__" : 1 00:01:29.136 Fetching value of define "__AVX512F__" : 1 00:01:29.136 Fetching value of define "__AVX512VL__" : 1 00:01:29.136 Fetching value of define "__PCLMUL__" : 1 00:01:29.136 Fetching value of define "__RDRND__" : 1 00:01:29.136 Fetching value of define "__RDSEED__" : 1 00:01:29.136 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:29.136 Fetching value of define "__znver1__" : (undefined) 00:01:29.136 Fetching value of define "__znver2__" : (undefined) 00:01:29.136 Fetching value of define "__znver3__" : (undefined) 00:01:29.136 Fetching value of define "__znver4__" : (undefined) 00:01:29.136 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:29.136 Message: lib/log: Defining dependency "log" 00:01:29.136 Message: lib/kvargs: Defining dependency "kvargs" 00:01:29.136 Message: lib/telemetry: Defining dependency "telemetry" 00:01:29.136 Checking for function "getentropy" : NO 00:01:29.136 Message: lib/eal: Defining dependency "eal" 00:01:29.136 Message: lib/ring: Defining dependency "ring" 00:01:29.136 Message: lib/rcu: Defining dependency "rcu" 00:01:29.136 Message: lib/mempool: Defining dependency "mempool" 00:01:29.136 Message: lib/mbuf: Defining dependency "mbuf" 00:01:29.136 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:29.136 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:29.136 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:29.136 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:29.136 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:29.136 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:29.136 Compiler for C supports arguments -mpclmul: YES 00:01:29.136 Compiler for C supports arguments -maes: YES 00:01:29.136 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:29.136 Compiler for C supports arguments -mavx512bw: YES 00:01:29.136 Compiler for C supports arguments -mavx512dq: YES 00:01:29.136 Compiler for C supports arguments -mavx512vl: YES 00:01:29.136 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:29.136 Compiler for C supports arguments -mavx2: YES 00:01:29.136 Compiler for C supports arguments -mavx: YES 00:01:29.136 Message: lib/net: Defining dependency "net" 00:01:29.136 Message: lib/meter: Defining dependency "meter" 00:01:29.136 Message: lib/ethdev: Defining dependency "ethdev" 00:01:29.136 Message: lib/pci: Defining dependency "pci" 00:01:29.136 Message: lib/cmdline: Defining dependency "cmdline" 00:01:29.136 Message: lib/hash: Defining dependency "hash" 00:01:29.136 Message: lib/timer: Defining dependency "timer" 00:01:29.136 Message: lib/compressdev: Defining dependency "compressdev" 00:01:29.136 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:29.136 Message: lib/dmadev: Defining dependency "dmadev" 00:01:29.136 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:29.136 
Message: lib/power: Defining dependency "power" 00:01:29.136 Message: lib/reorder: Defining dependency "reorder" 00:01:29.136 Message: lib/security: Defining dependency "security" 00:01:29.136 Has header "linux/userfaultfd.h" : YES 00:01:29.136 Has header "linux/vduse.h" : YES 00:01:29.136 Message: lib/vhost: Defining dependency "vhost" 00:01:29.136 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.136 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.136 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.136 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.136 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:29.136 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:29.136 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:29.136 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:29.136 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:29.136 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:29.136 Program doxygen found: YES (/usr/bin/doxygen) 00:01:29.136 Configuring doxy-api-html.conf using configuration 00:01:29.136 Configuring doxy-api-man.conf using configuration 00:01:29.136 Program mandb found: YES (/usr/bin/mandb) 00:01:29.136 Program sphinx-build found: NO 00:01:29.136 Configuring rte_build_config.h using configuration 00:01:29.136 Message: 00:01:29.136 ================= 00:01:29.136 Applications Enabled 00:01:29.136 ================= 00:01:29.136 00:01:29.136 apps: 00:01:29.136 00:01:29.136 00:01:29.136 Message: 00:01:29.136 ================= 00:01:29.136 Libraries Enabled 00:01:29.136 ================= 00:01:29.136 00:01:29.136 libs: 00:01:29.136 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:29.136 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:29.136 cryptodev, dmadev, power, reorder, security, vhost, 00:01:29.136 00:01:29.136 Message: 00:01:29.136 =============== 00:01:29.136 Drivers Enabled 00:01:29.136 =============== 00:01:29.136 00:01:29.136 common: 00:01:29.136 00:01:29.136 bus: 00:01:29.136 pci, vdev, 00:01:29.136 mempool: 00:01:29.136 ring, 00:01:29.136 dma: 00:01:29.137 00:01:29.137 net: 00:01:29.137 00:01:29.137 crypto: 00:01:29.137 00:01:29.137 compress: 00:01:29.137 00:01:29.137 vdpa: 00:01:29.137 00:01:29.137 00:01:29.137 Message: 00:01:29.137 ================= 00:01:29.137 Content Skipped 00:01:29.137 ================= 00:01:29.137 00:01:29.137 apps: 00:01:29.137 dumpcap: explicitly disabled via build config 00:01:29.137 graph: explicitly disabled via build config 00:01:29.137 pdump: explicitly disabled via build config 00:01:29.137 proc-info: explicitly disabled via build config 00:01:29.137 test-acl: explicitly disabled via build config 00:01:29.137 test-bbdev: explicitly disabled via build config 00:01:29.137 test-cmdline: explicitly disabled via build config 00:01:29.137 test-compress-perf: explicitly disabled via build config 00:01:29.137 test-crypto-perf: explicitly disabled via build config 00:01:29.137 test-dma-perf: explicitly disabled via build config 00:01:29.137 test-eventdev: explicitly disabled via build config 00:01:29.137 test-fib: explicitly disabled via build config 00:01:29.137 test-flow-perf: explicitly disabled via build config 00:01:29.137 test-gpudev: explicitly disabled via build config 00:01:29.137 test-mldev: explicitly disabled via build config 
00:01:29.137 test-pipeline: explicitly disabled via build config 00:01:29.137 test-pmd: explicitly disabled via build config 00:01:29.137 test-regex: explicitly disabled via build config 00:01:29.137 test-sad: explicitly disabled via build config 00:01:29.137 test-security-perf: explicitly disabled via build config 00:01:29.137 00:01:29.137 libs: 00:01:29.137 metrics: explicitly disabled via build config 00:01:29.137 acl: explicitly disabled via build config 00:01:29.137 bbdev: explicitly disabled via build config 00:01:29.137 bitratestats: explicitly disabled via build config 00:01:29.137 bpf: explicitly disabled via build config 00:01:29.137 cfgfile: explicitly disabled via build config 00:01:29.137 distributor: explicitly disabled via build config 00:01:29.137 efd: explicitly disabled via build config 00:01:29.137 eventdev: explicitly disabled via build config 00:01:29.137 dispatcher: explicitly disabled via build config 00:01:29.137 gpudev: explicitly disabled via build config 00:01:29.137 gro: explicitly disabled via build config 00:01:29.137 gso: explicitly disabled via build config 00:01:29.137 ip_frag: explicitly disabled via build config 00:01:29.137 jobstats: explicitly disabled via build config 00:01:29.137 latencystats: explicitly disabled via build config 00:01:29.137 lpm: explicitly disabled via build config 00:01:29.137 member: explicitly disabled via build config 00:01:29.137 pcapng: explicitly disabled via build config 00:01:29.137 rawdev: explicitly disabled via build config 00:01:29.137 regexdev: explicitly disabled via build config 00:01:29.137 mldev: explicitly disabled via build config 00:01:29.137 rib: explicitly disabled via build config 00:01:29.137 sched: explicitly disabled via build config 00:01:29.137 stack: explicitly disabled via build config 00:01:29.137 ipsec: explicitly disabled via build config 00:01:29.137 pdcp: explicitly disabled via build config 00:01:29.137 fib: explicitly disabled via build config 00:01:29.137 port: explicitly disabled via build config 00:01:29.137 pdump: explicitly disabled via build config 00:01:29.137 table: explicitly disabled via build config 00:01:29.137 pipeline: explicitly disabled via build config 00:01:29.137 graph: explicitly disabled via build config 00:01:29.137 node: explicitly disabled via build config 00:01:29.137 00:01:29.137 drivers: 00:01:29.137 common/cpt: not in enabled drivers build config 00:01:29.137 common/dpaax: not in enabled drivers build config 00:01:29.137 common/iavf: not in enabled drivers build config 00:01:29.137 common/idpf: not in enabled drivers build config 00:01:29.137 common/mvep: not in enabled drivers build config 00:01:29.137 common/octeontx: not in enabled drivers build config 00:01:29.137 bus/auxiliary: not in enabled drivers build config 00:01:29.137 bus/cdx: not in enabled drivers build config 00:01:29.137 bus/dpaa: not in enabled drivers build config 00:01:29.137 bus/fslmc: not in enabled drivers build config 00:01:29.137 bus/ifpga: not in enabled drivers build config 00:01:29.137 bus/platform: not in enabled drivers build config 00:01:29.137 bus/vmbus: not in enabled drivers build config 00:01:29.137 common/cnxk: not in enabled drivers build config 00:01:29.137 common/mlx5: not in enabled drivers build config 00:01:29.137 common/nfp: not in enabled drivers build config 00:01:29.137 common/qat: not in enabled drivers build config 00:01:29.137 common/sfc_efx: not in enabled drivers build config 00:01:29.137 mempool/bucket: not in enabled drivers build config 00:01:29.137 mempool/cnxk: 
not in enabled drivers build config 00:01:29.137 mempool/dpaa: not in enabled drivers build config 00:01:29.137 mempool/dpaa2: not in enabled drivers build config 00:01:29.137 mempool/octeontx: not in enabled drivers build config 00:01:29.137 mempool/stack: not in enabled drivers build config 00:01:29.137 dma/cnxk: not in enabled drivers build config 00:01:29.137 dma/dpaa: not in enabled drivers build config 00:01:29.137 dma/dpaa2: not in enabled drivers build config 00:01:29.137 dma/hisilicon: not in enabled drivers build config 00:01:29.137 dma/idxd: not in enabled drivers build config 00:01:29.137 dma/ioat: not in enabled drivers build config 00:01:29.137 dma/skeleton: not in enabled drivers build config 00:01:29.137 net/af_packet: not in enabled drivers build config 00:01:29.137 net/af_xdp: not in enabled drivers build config 00:01:29.137 net/ark: not in enabled drivers build config 00:01:29.137 net/atlantic: not in enabled drivers build config 00:01:29.137 net/avp: not in enabled drivers build config 00:01:29.137 net/axgbe: not in enabled drivers build config 00:01:29.137 net/bnx2x: not in enabled drivers build config 00:01:29.137 net/bnxt: not in enabled drivers build config 00:01:29.137 net/bonding: not in enabled drivers build config 00:01:29.137 net/cnxk: not in enabled drivers build config 00:01:29.137 net/cpfl: not in enabled drivers build config 00:01:29.137 net/cxgbe: not in enabled drivers build config 00:01:29.137 net/dpaa: not in enabled drivers build config 00:01:29.137 net/dpaa2: not in enabled drivers build config 00:01:29.137 net/e1000: not in enabled drivers build config 00:01:29.137 net/ena: not in enabled drivers build config 00:01:29.137 net/enetc: not in enabled drivers build config 00:01:29.137 net/enetfec: not in enabled drivers build config 00:01:29.137 net/enic: not in enabled drivers build config 00:01:29.137 net/failsafe: not in enabled drivers build config 00:01:29.137 net/fm10k: not in enabled drivers build config 00:01:29.137 net/gve: not in enabled drivers build config 00:01:29.137 net/hinic: not in enabled drivers build config 00:01:29.137 net/hns3: not in enabled drivers build config 00:01:29.137 net/i40e: not in enabled drivers build config 00:01:29.137 net/iavf: not in enabled drivers build config 00:01:29.137 net/ice: not in enabled drivers build config 00:01:29.137 net/idpf: not in enabled drivers build config 00:01:29.137 net/igc: not in enabled drivers build config 00:01:29.137 net/ionic: not in enabled drivers build config 00:01:29.137 net/ipn3ke: not in enabled drivers build config 00:01:29.137 net/ixgbe: not in enabled drivers build config 00:01:29.137 net/mana: not in enabled drivers build config 00:01:29.137 net/memif: not in enabled drivers build config 00:01:29.137 net/mlx4: not in enabled drivers build config 00:01:29.137 net/mlx5: not in enabled drivers build config 00:01:29.137 net/mvneta: not in enabled drivers build config 00:01:29.137 net/mvpp2: not in enabled drivers build config 00:01:29.137 net/netvsc: not in enabled drivers build config 00:01:29.137 net/nfb: not in enabled drivers build config 00:01:29.137 net/nfp: not in enabled drivers build config 00:01:29.137 net/ngbe: not in enabled drivers build config 00:01:29.137 net/null: not in enabled drivers build config 00:01:29.137 net/octeontx: not in enabled drivers build config 00:01:29.137 net/octeon_ep: not in enabled drivers build config 00:01:29.137 net/pcap: not in enabled drivers build config 00:01:29.137 net/pfe: not in enabled drivers build config 00:01:29.137 net/qede: 
not in enabled drivers build config 00:01:29.137 net/ring: not in enabled drivers build config 00:01:29.137 net/sfc: not in enabled drivers build config 00:01:29.137 net/softnic: not in enabled drivers build config 00:01:29.137 net/tap: not in enabled drivers build config 00:01:29.137 net/thunderx: not in enabled drivers build config 00:01:29.137 net/txgbe: not in enabled drivers build config 00:01:29.137 net/vdev_netvsc: not in enabled drivers build config 00:01:29.137 net/vhost: not in enabled drivers build config 00:01:29.137 net/virtio: not in enabled drivers build config 00:01:29.137 net/vmxnet3: not in enabled drivers build config 00:01:29.137 raw/*: missing internal dependency, "rawdev" 00:01:29.137 crypto/armv8: not in enabled drivers build config 00:01:29.137 crypto/bcmfs: not in enabled drivers build config 00:01:29.137 crypto/caam_jr: not in enabled drivers build config 00:01:29.137 crypto/ccp: not in enabled drivers build config 00:01:29.137 crypto/cnxk: not in enabled drivers build config 00:01:29.137 crypto/dpaa_sec: not in enabled drivers build config 00:01:29.137 crypto/dpaa2_sec: not in enabled drivers build config 00:01:29.137 crypto/ipsec_mb: not in enabled drivers build config 00:01:29.137 crypto/mlx5: not in enabled drivers build config 00:01:29.137 crypto/mvsam: not in enabled drivers build config 00:01:29.137 crypto/nitrox: not in enabled drivers build config 00:01:29.137 crypto/null: not in enabled drivers build config 00:01:29.137 crypto/octeontx: not in enabled drivers build config 00:01:29.137 crypto/openssl: not in enabled drivers build config 00:01:29.137 crypto/scheduler: not in enabled drivers build config 00:01:29.137 crypto/uadk: not in enabled drivers build config 00:01:29.137 crypto/virtio: not in enabled drivers build config 00:01:29.137 compress/isal: not in enabled drivers build config 00:01:29.137 compress/mlx5: not in enabled drivers build config 00:01:29.137 compress/octeontx: not in enabled drivers build config 00:01:29.137 compress/zlib: not in enabled drivers build config 00:01:29.137 regex/*: missing internal dependency, "regexdev" 00:01:29.137 ml/*: missing internal dependency, "mldev" 00:01:29.137 vdpa/ifc: not in enabled drivers build config 00:01:29.137 vdpa/mlx5: not in enabled drivers build config 00:01:29.137 vdpa/nfp: not in enabled drivers build config 00:01:29.137 vdpa/sfc: not in enabled drivers build config 00:01:29.138 event/*: missing internal dependency, "eventdev" 00:01:29.138 baseband/*: missing internal dependency, "bbdev" 00:01:29.138 gpu/*: missing internal dependency, "gpudev" 00:01:29.138 00:01:29.138 00:01:29.138 Build targets in project: 84 00:01:29.138 00:01:29.138 DPDK 23.11.0 00:01:29.138 00:01:29.138 User defined options 00:01:29.138 buildtype : debug 00:01:29.138 default_library : shared 00:01:29.138 libdir : lib 00:01:29.138 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:29.138 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:29.138 c_link_args : 00:01:29.138 cpu_instruction_set: native 00:01:29.138 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex 00:01:29.138 disable_libs : 
pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso 00:01:29.138 enable_docs : false 00:01:29.138 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:29.138 enable_kmods : false 00:01:29.138 tests : false 00:01:29.138 00:01:29.138 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:29.138 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:29.138 [1/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:29.138 [2/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:29.138 [3/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:29.138 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:29.138 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:29.138 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:29.138 [7/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:29.138 [8/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:29.138 [9/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:29.138 [10/264] Linking static target lib/librte_log.a 00:01:29.138 [11/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:29.138 [12/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:29.138 [13/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:29.138 [14/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:29.138 [15/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:29.138 [16/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:29.138 [17/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:29.138 [18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:29.138 [19/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:29.138 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:29.138 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:29.138 [22/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:29.138 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:29.138 [24/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:29.138 [25/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:29.138 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:29.138 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:29.138 [28/264] Linking static target lib/librte_kvargs.a 00:01:29.138 [29/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:29.138 [30/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:29.138 [31/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:29.138 [32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:29.138 [33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:29.138 [34/264] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:29.398 [35/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:29.398 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:29.398 [37/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:29.398 [38/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:29.398 [39/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:29.398 [40/264] Linking static target lib/librte_pci.a 00:01:29.398 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:29.398 [42/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:29.398 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:29.398 [44/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:29.398 [45/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:29.398 [46/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:29.398 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:29.398 [48/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:29.398 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:29.398 [50/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:29.398 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:29.398 [52/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:29.398 [53/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:29.398 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:29.398 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:29.398 [56/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:29.398 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:29.398 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:29.398 [59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:29.398 [60/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:29.398 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:29.398 [62/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:29.398 [63/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:29.398 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:29.398 [65/264] Linking static target lib/librte_meter.a 00:01:29.398 [66/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:29.398 [67/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:29.398 [68/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:29.398 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:29.398 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:29.398 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:29.398 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:29.398 [73/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:29.398 [74/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:29.398 [75/264] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:29.398 [76/264] Linking static target lib/librte_ring.a 00:01:29.398 [77/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:29.398 [78/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:29.398 [79/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:29.398 [80/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:29.398 [81/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:29.398 [82/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:29.398 [83/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:29.398 [84/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:29.398 [85/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:29.398 [86/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:29.398 [87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:29.398 [88/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:29.398 [89/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:29.398 [90/264] Linking static target lib/librte_timer.a 00:01:29.398 [91/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:29.398 [92/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:29.398 [93/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:29.398 [94/264] Linking static target lib/librte_cmdline.a 00:01:29.398 [95/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:29.398 [96/264] Linking static target lib/librte_dmadev.a 00:01:29.398 [97/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:29.398 [98/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.398 [99/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:29.398 [100/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:29.398 [101/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:29.398 [102/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:29.398 [103/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:29.398 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:29.398 [105/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:29.659 [106/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:29.659 [107/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:29.659 [108/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:29.659 [109/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.659 [110/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:29.659 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:29.659 [112/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:29.659 [113/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:29.659 [114/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:29.659 [115/264] Compiling C 
object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:29.659 [116/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:29.659 [117/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:29.659 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:29.659 [119/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:29.659 [120/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:29.659 [121/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:29.659 [122/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:29.659 [123/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:29.659 [124/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:29.659 [125/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:29.659 [126/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:29.659 [127/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.659 [128/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:29.659 [129/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:29.659 [130/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:29.659 [131/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:29.659 [132/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:29.659 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:29.659 [134/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:29.659 [135/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:29.659 [136/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:29.659 [137/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:29.659 [138/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:29.659 [139/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:29.659 [140/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:29.659 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:29.659 [142/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:29.659 [143/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:29.659 [144/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:29.659 [145/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:29.659 [146/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:29.659 [147/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:29.659 [148/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:29.659 [149/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:29.659 [150/264] Linking static target lib/librte_net.a 00:01:29.659 [151/264] Linking static target lib/librte_telemetry.a 00:01:29.659 [152/264] Linking static target lib/librte_mbuf.a 00:01:29.659 [153/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:29.659 [154/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:29.659 [155/264] Generating lib/log.sym_chk with a custom command (wrapped 
by meson to capture output) 00:01:29.659 [156/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:29.659 [157/264] Linking static target lib/librte_eal.a 00:01:29.659 [158/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:29.659 [159/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:29.659 [160/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:29.659 [161/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:29.659 [162/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:29.659 [163/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:29.659 [164/264] Linking static target lib/librte_power.a 00:01:29.659 [165/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:29.659 [166/264] Linking static target lib/librte_compressdev.a 00:01:29.659 [167/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.659 [168/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:29.659 [169/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:29.919 [170/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:29.919 [171/264] Linking target lib/librte_log.so.24.0 00:01:29.919 [172/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:29.920 [173/264] Linking static target lib/librte_reorder.a 00:01:29.920 [174/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:29.920 [175/264] Linking static target lib/librte_rcu.a 00:01:29.920 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:29.920 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:29.920 [178/264] Linking static target lib/librte_hash.a 00:01:29.920 [179/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:29.920 [180/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:29.920 [181/264] Linking static target lib/librte_mempool.a 00:01:29.920 [182/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:29.920 [183/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:29.920 [184/264] Linking static target lib/librte_security.a 00:01:29.920 [185/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:29.920 [186/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:29.920 [187/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:29.920 [188/264] Linking static target lib/librte_cryptodev.a 00:01:29.920 [189/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.920 [190/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:29.920 [191/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:29.920 [192/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:29.920 [193/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.920 [194/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:29.920 [195/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:29.920 [196/264] Linking target lib/librte_kvargs.so.24.0 00:01:29.920 [197/264] Linking static target 
drivers/librte_bus_vdev.a 00:01:29.920 [198/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:29.920 [199/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:29.920 [200/264] Linking static target drivers/librte_bus_pci.a 00:01:29.920 [201/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:30.181 [202/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:30.181 [203/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:30.181 [204/264] Linking static target drivers/librte_mempool_ring.a 00:01:30.181 [205/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.181 [206/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:30.181 [207/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.181 [208/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.441 [209/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.441 [210/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.441 [211/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:30.441 [212/264] Linking target lib/librte_telemetry.so.24.0 00:01:30.441 [213/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:30.441 [214/264] Linking static target lib/librte_ethdev.a 00:01:30.441 [215/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.441 [216/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:30.441 [217/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.441 [218/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.702 [219/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.702 [220/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.702 [221/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.702 [222/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.702 [223/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.645 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:31.645 [225/264] Linking static target lib/librte_vhost.a 00:01:31.906 [226/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.874 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.169 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.089 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.089 [230/264] Linking target lib/librte_eal.so.24.0 00:01:41.350 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:41.350 [232/264] Linking target lib/librte_ring.so.24.0 00:01:41.350 
[233/264] Linking target lib/librte_meter.so.24.0 00:01:41.350 [234/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:41.350 [235/264] Linking target lib/librte_pci.so.24.0 00:01:41.350 [236/264] Linking target lib/librte_timer.so.24.0 00:01:41.350 [237/264] Linking target lib/librte_dmadev.so.24.0 00:01:41.612 [238/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:41.612 [239/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:41.612 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:41.612 [241/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:41.612 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:41.612 [243/264] Linking target lib/librte_mempool.so.24.0 00:01:41.612 [244/264] Linking target lib/librte_rcu.so.24.0 00:01:41.612 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:41.612 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:41.612 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:41.875 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:41.875 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:41.875 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:41.875 [251/264] Linking target lib/librte_compressdev.so.24.0 00:01:42.137 [252/264] Linking target lib/librte_reorder.so.24.0 00:01:42.137 [253/264] Linking target lib/librte_net.so.24.0 00:01:42.137 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:01:42.137 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:42.137 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:42.137 [257/264] Linking target lib/librte_hash.so.24.0 00:01:42.137 [258/264] Linking target lib/librte_security.so.24.0 00:01:42.137 [259/264] Linking target lib/librte_cmdline.so.24.0 00:01:42.137 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:42.398 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:42.398 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:42.398 [263/264] Linking target lib/librte_power.so.24.0 00:01:42.398 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:42.398 INFO: autodetecting backend as ninja 00:01:42.398 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:01:43.786 CC lib/ut_mock/mock.o 00:01:43.786 CC lib/ut/ut.o 00:01:43.786 CC lib/log/log.o 00:01:43.786 CC lib/log/log_flags.o 00:01:43.786 CC lib/log/log_deprecated.o 00:01:43.786 LIB libspdk_ut_mock.a 00:01:43.786 LIB libspdk_ut.a 00:01:43.786 SO libspdk_ut_mock.so.6.0 00:01:43.786 LIB libspdk_log.a 00:01:43.786 SO libspdk_ut.so.2.0 00:01:43.786 SO libspdk_log.so.7.0 00:01:43.786 SYMLINK libspdk_ut_mock.so 00:01:43.786 SYMLINK libspdk_ut.so 00:01:44.047 SYMLINK libspdk_log.so 00:01:44.309 CC lib/dma/dma.o 00:01:44.309 CXX lib/trace_parser/trace.o 00:01:44.309 CC lib/util/base64.o 00:01:44.309 CC lib/ioat/ioat.o 00:01:44.309 CC lib/util/bit_array.o 00:01:44.309 CC lib/util/cpuset.o 00:01:44.309 CC lib/util/crc32.o 00:01:44.309 CC lib/util/crc16.o 00:01:44.309 CC lib/util/crc32c.o 00:01:44.309 CC lib/util/dif.o 00:01:44.309 CC 
lib/util/crc32_ieee.o 00:01:44.309 CC lib/util/crc64.o 00:01:44.309 CC lib/util/fd.o 00:01:44.309 CC lib/util/file.o 00:01:44.309 CC lib/util/hexlify.o 00:01:44.309 CC lib/util/iov.o 00:01:44.309 CC lib/util/math.o 00:01:44.309 CC lib/util/pipe.o 00:01:44.309 CC lib/util/strerror_tls.o 00:01:44.309 CC lib/util/string.o 00:01:44.309 CC lib/util/uuid.o 00:01:44.309 CC lib/util/fd_group.o 00:01:44.309 CC lib/util/xor.o 00:01:44.309 CC lib/util/zipf.o 00:01:44.571 CC lib/vfio_user/host/vfio_user_pci.o 00:01:44.571 CC lib/vfio_user/host/vfio_user.o 00:01:44.571 LIB libspdk_dma.a 00:01:44.571 SO libspdk_dma.so.4.0 00:01:44.571 LIB libspdk_ioat.a 00:01:44.571 SYMLINK libspdk_dma.so 00:01:44.571 SO libspdk_ioat.so.7.0 00:01:44.832 LIB libspdk_vfio_user.a 00:01:44.832 SYMLINK libspdk_ioat.so 00:01:44.832 SO libspdk_vfio_user.so.5.0 00:01:44.832 LIB libspdk_util.a 00:01:44.832 SYMLINK libspdk_vfio_user.so 00:01:44.832 SO libspdk_util.so.9.0 00:01:45.093 SYMLINK libspdk_util.so 00:01:45.093 LIB libspdk_trace_parser.a 00:01:45.093 SO libspdk_trace_parser.so.5.0 00:01:45.354 SYMLINK libspdk_trace_parser.so 00:01:45.354 CC lib/idxd/idxd.o 00:01:45.354 CC lib/idxd/idxd_user.o 00:01:45.354 CC lib/vmd/vmd.o 00:01:45.354 CC lib/env_dpdk/env.o 00:01:45.354 CC lib/vmd/led.o 00:01:45.354 CC lib/env_dpdk/memory.o 00:01:45.354 CC lib/env_dpdk/pci.o 00:01:45.354 CC lib/env_dpdk/init.o 00:01:45.354 CC lib/env_dpdk/threads.o 00:01:45.354 CC lib/env_dpdk/pci_ioat.o 00:01:45.354 CC lib/env_dpdk/pci_virtio.o 00:01:45.354 CC lib/env_dpdk/pci_vmd.o 00:01:45.354 CC lib/env_dpdk/pci_idxd.o 00:01:45.354 CC lib/env_dpdk/pci_event.o 00:01:45.354 CC lib/env_dpdk/sigbus_handler.o 00:01:45.354 CC lib/env_dpdk/pci_dpdk.o 00:01:45.354 CC lib/json/json_parse.o 00:01:45.354 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:45.354 CC lib/rdma/common.o 00:01:45.354 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:45.354 CC lib/json/json_util.o 00:01:45.354 CC lib/rdma/rdma_verbs.o 00:01:45.354 CC lib/json/json_write.o 00:01:45.354 CC lib/conf/conf.o 00:01:45.614 LIB libspdk_conf.a 00:01:45.614 SO libspdk_conf.so.6.0 00:01:45.614 SYMLINK libspdk_conf.so 00:01:45.614 LIB libspdk_rdma.a 00:01:45.614 LIB libspdk_json.a 00:01:45.614 SO libspdk_rdma.so.6.0 00:01:45.875 SO libspdk_json.so.6.0 00:01:45.875 SYMLINK libspdk_rdma.so 00:01:45.875 SYMLINK libspdk_json.so 00:01:45.875 LIB libspdk_idxd.a 00:01:45.875 SO libspdk_idxd.so.12.0 00:01:45.875 LIB libspdk_vmd.a 00:01:45.875 SYMLINK libspdk_idxd.so 00:01:45.875 SO libspdk_vmd.so.6.0 00:01:46.136 SYMLINK libspdk_vmd.so 00:01:46.136 CC lib/jsonrpc/jsonrpc_server.o 00:01:46.136 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:46.136 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:46.136 CC lib/jsonrpc/jsonrpc_client.o 00:01:46.397 LIB libspdk_jsonrpc.a 00:01:46.397 SO libspdk_jsonrpc.so.6.0 00:01:46.658 SYMLINK libspdk_jsonrpc.so 00:01:46.658 LIB libspdk_env_dpdk.a 00:01:46.658 SO libspdk_env_dpdk.so.14.0 00:01:46.658 SYMLINK libspdk_env_dpdk.so 00:01:46.920 CC lib/rpc/rpc.o 00:01:47.182 LIB libspdk_rpc.a 00:01:47.182 SO libspdk_rpc.so.6.0 00:01:47.182 SYMLINK libspdk_rpc.so 00:01:47.443 CC lib/notify/notify.o 00:01:47.443 CC lib/notify/notify_rpc.o 00:01:47.443 CC lib/trace/trace.o 00:01:47.443 CC lib/trace/trace_flags.o 00:01:47.443 CC lib/trace/trace_rpc.o 00:01:47.443 CC lib/keyring/keyring.o 00:01:47.443 CC lib/keyring/keyring_rpc.o 00:01:47.704 LIB libspdk_notify.a 00:01:47.704 SO libspdk_notify.so.6.0 00:01:47.704 LIB libspdk_trace.a 00:01:47.704 LIB libspdk_keyring.a 00:01:47.704 SO libspdk_trace.so.10.0 00:01:47.704 
SYMLINK libspdk_notify.so 00:01:47.704 SO libspdk_keyring.so.1.0 00:01:47.965 SYMLINK libspdk_trace.so 00:01:47.965 SYMLINK libspdk_keyring.so 00:01:48.226 CC lib/sock/sock.o 00:01:48.226 CC lib/sock/sock_rpc.o 00:01:48.226 CC lib/thread/thread.o 00:01:48.226 CC lib/thread/iobuf.o 00:01:48.486 LIB libspdk_sock.a 00:01:48.486 SO libspdk_sock.so.9.0 00:01:48.746 SYMLINK libspdk_sock.so 00:01:49.007 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:49.007 CC lib/nvme/nvme_ctrlr.o 00:01:49.007 CC lib/nvme/nvme_fabric.o 00:01:49.007 CC lib/nvme/nvme_ns_cmd.o 00:01:49.007 CC lib/nvme/nvme_ns.o 00:01:49.007 CC lib/nvme/nvme_pcie_common.o 00:01:49.007 CC lib/nvme/nvme_pcie.o 00:01:49.007 CC lib/nvme/nvme_qpair.o 00:01:49.007 CC lib/nvme/nvme.o 00:01:49.007 CC lib/nvme/nvme_quirks.o 00:01:49.007 CC lib/nvme/nvme_transport.o 00:01:49.007 CC lib/nvme/nvme_discovery.o 00:01:49.007 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:49.007 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:49.007 CC lib/nvme/nvme_tcp.o 00:01:49.007 CC lib/nvme/nvme_opal.o 00:01:49.007 CC lib/nvme/nvme_io_msg.o 00:01:49.007 CC lib/nvme/nvme_poll_group.o 00:01:49.007 CC lib/nvme/nvme_zns.o 00:01:49.007 CC lib/nvme/nvme_stubs.o 00:01:49.007 CC lib/nvme/nvme_auth.o 00:01:49.007 CC lib/nvme/nvme_cuse.o 00:01:49.007 CC lib/nvme/nvme_vfio_user.o 00:01:49.007 CC lib/nvme/nvme_rdma.o 00:01:49.578 LIB libspdk_thread.a 00:01:49.578 SO libspdk_thread.so.10.0 00:01:49.578 SYMLINK libspdk_thread.so 00:01:49.839 CC lib/accel/accel.o 00:01:49.839 CC lib/accel/accel_rpc.o 00:01:49.839 CC lib/accel/accel_sw.o 00:01:49.839 CC lib/blob/blobstore.o 00:01:49.839 CC lib/init/json_config.o 00:01:49.839 CC lib/blob/request.o 00:01:49.839 CC lib/init/subsystem.o 00:01:49.839 CC lib/blob/zeroes.o 00:01:49.839 CC lib/init/subsystem_rpc.o 00:01:49.839 CC lib/blob/blob_bs_dev.o 00:01:49.839 CC lib/init/rpc.o 00:01:49.839 CC lib/vfu_tgt/tgt_endpoint.o 00:01:49.839 CC lib/virtio/virtio.o 00:01:49.839 CC lib/virtio/virtio_vhost_user.o 00:01:49.839 CC lib/vfu_tgt/tgt_rpc.o 00:01:49.839 CC lib/virtio/virtio_vfio_user.o 00:01:49.839 CC lib/virtio/virtio_pci.o 00:01:50.101 LIB libspdk_init.a 00:01:50.101 SO libspdk_init.so.5.0 00:01:50.101 LIB libspdk_virtio.a 00:01:50.101 LIB libspdk_vfu_tgt.a 00:01:50.101 SYMLINK libspdk_init.so 00:01:50.101 SO libspdk_vfu_tgt.so.3.0 00:01:50.362 SO libspdk_virtio.so.7.0 00:01:50.362 SYMLINK libspdk_vfu_tgt.so 00:01:50.362 SYMLINK libspdk_virtio.so 00:01:50.623 CC lib/event/app.o 00:01:50.623 CC lib/event/reactor.o 00:01:50.623 CC lib/event/log_rpc.o 00:01:50.623 CC lib/event/app_rpc.o 00:01:50.623 CC lib/event/scheduler_static.o 00:01:50.623 LIB libspdk_accel.a 00:01:50.914 LIB libspdk_nvme.a 00:01:50.914 SO libspdk_accel.so.15.0 00:01:50.914 SYMLINK libspdk_accel.so 00:01:50.914 SO libspdk_nvme.so.13.0 00:01:50.914 LIB libspdk_event.a 00:01:50.914 SO libspdk_event.so.13.0 00:01:50.914 SYMLINK libspdk_event.so 00:01:51.175 CC lib/bdev/bdev.o 00:01:51.175 CC lib/bdev/bdev_rpc.o 00:01:51.175 CC lib/bdev/bdev_zone.o 00:01:51.175 CC lib/bdev/part.o 00:01:51.175 CC lib/bdev/scsi_nvme.o 00:01:51.175 SYMLINK libspdk_nvme.so 00:01:52.559 LIB libspdk_blob.a 00:01:52.559 SO libspdk_blob.so.11.0 00:01:52.559 SYMLINK libspdk_blob.so 00:01:52.819 CC lib/lvol/lvol.o 00:01:52.819 CC lib/blobfs/blobfs.o 00:01:52.819 CC lib/blobfs/tree.o 00:01:53.392 LIB libspdk_bdev.a 00:01:53.392 LIB libspdk_blobfs.a 00:01:53.392 SO libspdk_bdev.so.15.0 00:01:53.392 LIB libspdk_lvol.a 00:01:53.392 SO libspdk_blobfs.so.10.0 00:01:53.652 SO libspdk_lvol.so.10.0 00:01:53.652 SYMLINK 
libspdk_blobfs.so 00:01:53.652 SYMLINK libspdk_bdev.so 00:01:53.652 SYMLINK libspdk_lvol.so 00:01:53.912 CC lib/ftl/ftl_core.o 00:01:53.912 CC lib/ftl/ftl_init.o 00:01:53.912 CC lib/ftl/ftl_layout.o 00:01:53.912 CC lib/ftl/ftl_debug.o 00:01:53.912 CC lib/ftl/ftl_io.o 00:01:53.912 CC lib/ftl/ftl_sb.o 00:01:53.912 CC lib/ftl/ftl_l2p.o 00:01:53.912 CC lib/ftl/ftl_l2p_flat.o 00:01:53.912 CC lib/ftl/ftl_nv_cache.o 00:01:53.912 CC lib/ftl/ftl_band_ops.o 00:01:53.912 CC lib/ftl/ftl_band.o 00:01:53.912 CC lib/ftl/ftl_writer.o 00:01:53.912 CC lib/ftl/ftl_rq.o 00:01:53.912 CC lib/ftl/ftl_reloc.o 00:01:53.912 CC lib/ftl/ftl_l2p_cache.o 00:01:53.912 CC lib/ftl/ftl_p2l.o 00:01:53.912 CC lib/ftl/mngt/ftl_mngt.o 00:01:53.912 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:53.912 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:53.912 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:53.912 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:53.912 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:53.912 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:53.913 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:53.913 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:53.913 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:53.913 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:53.913 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:53.913 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:53.913 CC lib/nvmf/ctrlr.o 00:01:53.913 CC lib/ftl/utils/ftl_conf.o 00:01:53.913 CC lib/ublk/ublk.o 00:01:53.913 CC lib/ftl/utils/ftl_md.o 00:01:53.913 CC lib/nbd/nbd.o 00:01:53.913 CC lib/ublk/ublk_rpc.o 00:01:53.913 CC lib/nvmf/ctrlr_discovery.o 00:01:53.913 CC lib/nvmf/ctrlr_bdev.o 00:01:53.913 CC lib/scsi/dev.o 00:01:53.913 CC lib/ftl/utils/ftl_mempool.o 00:01:53.913 CC lib/nvmf/subsystem.o 00:01:53.913 CC lib/ftl/utils/ftl_bitmap.o 00:01:53.913 CC lib/nbd/nbd_rpc.o 00:01:53.913 CC lib/scsi/lun.o 00:01:53.913 CC lib/nvmf/nvmf.o 00:01:53.913 CC lib/nvmf/nvmf_rpc.o 00:01:53.913 CC lib/ftl/utils/ftl_property.o 00:01:53.913 CC lib/scsi/scsi.o 00:01:53.913 CC lib/scsi/port.o 00:01:53.913 CC lib/nvmf/transport.o 00:01:53.913 CC lib/nvmf/tcp.o 00:01:53.913 CC lib/scsi/scsi_pr.o 00:01:53.913 CC lib/nvmf/vfio_user.o 00:01:53.913 CC lib/nvmf/rdma.o 00:01:53.913 CC lib/scsi/scsi_rpc.o 00:01:53.913 CC lib/scsi/task.o 00:01:53.913 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:53.913 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:53.913 CC lib/scsi/scsi_bdev.o 00:01:53.913 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:53.913 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:53.913 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:53.913 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:53.913 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:53.913 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:53.913 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:53.913 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:53.913 CC lib/ftl/base/ftl_base_dev.o 00:01:53.913 CC lib/ftl/base/ftl_base_bdev.o 00:01:53.913 CC lib/ftl/ftl_trace.o 00:01:54.484 LIB libspdk_scsi.a 00:01:54.484 SO libspdk_scsi.so.9.0 00:01:54.484 LIB libspdk_nbd.a 00:01:54.484 SO libspdk_nbd.so.7.0 00:01:54.484 LIB libspdk_ublk.a 00:01:54.484 SYMLINK libspdk_scsi.so 00:01:54.484 SO libspdk_ublk.so.3.0 00:01:54.484 SYMLINK libspdk_nbd.so 00:01:54.745 SYMLINK libspdk_ublk.so 00:01:54.745 LIB libspdk_ftl.a 00:01:55.007 SO libspdk_ftl.so.9.0 00:01:55.007 CC lib/iscsi/conn.o 00:01:55.007 CC lib/iscsi/init_grp.o 00:01:55.007 CC lib/vhost/vhost.o 00:01:55.007 CC lib/iscsi/iscsi.o 00:01:55.007 CC lib/vhost/vhost_rpc.o 00:01:55.007 CC lib/iscsi/md5.o 00:01:55.007 CC lib/vhost/vhost_scsi.o 00:01:55.007 CC lib/iscsi/param.o 00:01:55.007 CC lib/vhost/vhost_blk.o 00:01:55.007 CC 
lib/iscsi/portal_grp.o 00:01:55.007 CC lib/vhost/rte_vhost_user.o 00:01:55.007 CC lib/iscsi/tgt_node.o 00:01:55.007 CC lib/iscsi/iscsi_subsystem.o 00:01:55.007 CC lib/iscsi/iscsi_rpc.o 00:01:55.007 CC lib/iscsi/task.o 00:01:55.268 SYMLINK libspdk_ftl.so 00:01:55.839 LIB libspdk_nvmf.a 00:01:55.839 LIB libspdk_vhost.a 00:01:55.840 SO libspdk_nvmf.so.18.0 00:01:55.840 SO libspdk_vhost.so.8.0 00:01:56.100 SYMLINK libspdk_vhost.so 00:01:56.100 LIB libspdk_iscsi.a 00:01:56.100 SYMLINK libspdk_nvmf.so 00:01:56.100 SO libspdk_iscsi.so.8.0 00:01:56.361 SYMLINK libspdk_iscsi.so 00:01:56.934 CC module/env_dpdk/env_dpdk_rpc.o 00:01:56.934 CC module/vfu_device/vfu_virtio.o 00:01:56.934 CC module/vfu_device/vfu_virtio_blk.o 00:01:56.934 CC module/vfu_device/vfu_virtio_scsi.o 00:01:56.934 CC module/vfu_device/vfu_virtio_rpc.o 00:01:56.934 CC module/sock/posix/posix.o 00:01:56.934 LIB libspdk_env_dpdk_rpc.a 00:01:56.934 CC module/accel/error/accel_error.o 00:01:56.934 CC module/accel/error/accel_error_rpc.o 00:01:56.934 CC module/accel/iaa/accel_iaa.o 00:01:56.934 CC module/accel/iaa/accel_iaa_rpc.o 00:01:56.934 CC module/keyring/file/keyring.o 00:01:56.934 CC module/keyring/file/keyring_rpc.o 00:01:56.934 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:56.934 CC module/blob/bdev/blob_bdev.o 00:01:56.934 CC module/accel/dsa/accel_dsa.o 00:01:56.934 CC module/scheduler/gscheduler/gscheduler.o 00:01:56.934 CC module/accel/dsa/accel_dsa_rpc.o 00:01:56.934 CC module/accel/ioat/accel_ioat.o 00:01:56.934 CC module/accel/ioat/accel_ioat_rpc.o 00:01:56.934 SO libspdk_env_dpdk_rpc.so.6.0 00:01:56.934 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:56.934 SYMLINK libspdk_env_dpdk_rpc.so 00:01:57.195 LIB libspdk_scheduler_gscheduler.a 00:01:57.195 LIB libspdk_keyring_file.a 00:01:57.195 LIB libspdk_accel_error.a 00:01:57.195 SO libspdk_keyring_file.so.1.0 00:01:57.195 LIB libspdk_scheduler_dpdk_governor.a 00:01:57.195 LIB libspdk_accel_ioat.a 00:01:57.195 SO libspdk_scheduler_gscheduler.so.4.0 00:01:57.195 LIB libspdk_scheduler_dynamic.a 00:01:57.195 LIB libspdk_accel_iaa.a 00:01:57.195 SO libspdk_accel_error.so.2.0 00:01:57.195 LIB libspdk_vfu_device.a 00:01:57.195 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:57.195 SO libspdk_scheduler_dynamic.so.4.0 00:01:57.195 SO libspdk_accel_ioat.so.6.0 00:01:57.195 LIB libspdk_blob_bdev.a 00:01:57.195 LIB libspdk_accel_dsa.a 00:01:57.195 SO libspdk_accel_iaa.so.3.0 00:01:57.195 SYMLINK libspdk_keyring_file.so 00:01:57.195 SYMLINK libspdk_scheduler_gscheduler.so 00:01:57.195 SO libspdk_blob_bdev.so.11.0 00:01:57.195 SO libspdk_vfu_device.so.3.0 00:01:57.195 SYMLINK libspdk_accel_error.so 00:01:57.195 SYMLINK libspdk_accel_ioat.so 00:01:57.195 SYMLINK libspdk_scheduler_dynamic.so 00:01:57.195 SO libspdk_accel_dsa.so.5.0 00:01:57.195 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:57.195 SYMLINK libspdk_accel_iaa.so 00:01:57.195 SYMLINK libspdk_blob_bdev.so 00:01:57.195 SYMLINK libspdk_vfu_device.so 00:01:57.457 SYMLINK libspdk_accel_dsa.so 00:01:57.457 LIB libspdk_sock_posix.a 00:01:57.717 SO libspdk_sock_posix.so.6.0 00:01:57.717 SYMLINK libspdk_sock_posix.so 00:01:57.717 CC module/bdev/null/bdev_null.o 00:01:57.717 CC module/bdev/null/bdev_null_rpc.o 00:01:57.717 CC module/bdev/gpt/gpt.o 00:01:57.717 CC module/bdev/gpt/vbdev_gpt.o 00:01:57.717 CC module/bdev/aio/bdev_aio.o 00:01:57.717 CC module/bdev/delay/vbdev_delay.o 00:01:57.717 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:57.717 CC module/bdev/aio/bdev_aio_rpc.o 00:01:57.717 CC 
module/bdev/passthru/vbdev_passthru.o 00:01:57.717 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:57.717 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:57.717 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:57.717 CC module/bdev/malloc/bdev_malloc.o 00:01:57.717 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:57.717 CC module/bdev/lvol/vbdev_lvol.o 00:01:57.977 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:57.977 CC module/bdev/raid/bdev_raid.o 00:01:57.977 CC module/bdev/split/vbdev_split.o 00:01:57.977 CC module/bdev/error/vbdev_error.o 00:01:57.977 CC module/bdev/raid/bdev_raid_rpc.o 00:01:57.977 CC module/bdev/split/vbdev_split_rpc.o 00:01:57.977 CC module/bdev/error/vbdev_error_rpc.o 00:01:57.977 CC module/bdev/raid/bdev_raid_sb.o 00:01:57.977 CC module/bdev/raid/raid0.o 00:01:57.977 CC module/bdev/nvme/bdev_nvme.o 00:01:57.977 CC module/bdev/raid/raid1.o 00:01:57.977 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:57.977 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:57.977 CC module/blobfs/bdev/blobfs_bdev.o 00:01:57.977 CC module/bdev/raid/concat.o 00:01:57.977 CC module/bdev/nvme/nvme_rpc.o 00:01:57.977 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:57.977 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:57.977 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:57.977 CC module/bdev/nvme/bdev_mdns_client.o 00:01:57.977 CC module/bdev/nvme/vbdev_opal.o 00:01:57.977 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:57.977 CC module/bdev/iscsi/bdev_iscsi.o 00:01:57.977 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:57.977 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:57.977 CC module/bdev/ftl/bdev_ftl.o 00:01:57.977 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:57.977 LIB libspdk_bdev_split.a 00:01:58.238 LIB libspdk_blobfs_bdev.a 00:01:58.238 LIB libspdk_bdev_passthru.a 00:01:58.238 SO libspdk_bdev_split.so.6.0 00:01:58.238 LIB libspdk_bdev_null.a 00:01:58.238 SO libspdk_blobfs_bdev.so.6.0 00:01:58.238 LIB libspdk_bdev_ftl.a 00:01:58.238 SO libspdk_bdev_passthru.so.6.0 00:01:58.238 LIB libspdk_bdev_gpt.a 00:01:58.238 LIB libspdk_bdev_error.a 00:01:58.238 LIB libspdk_bdev_malloc.a 00:01:58.238 SO libspdk_bdev_null.so.6.0 00:01:58.238 SO libspdk_bdev_ftl.so.6.0 00:01:58.238 SO libspdk_bdev_error.so.6.0 00:01:58.238 LIB libspdk_bdev_zone_block.a 00:01:58.238 SO libspdk_bdev_gpt.so.6.0 00:01:58.238 SYMLINK libspdk_bdev_split.so 00:01:58.238 LIB libspdk_bdev_delay.a 00:01:58.238 SYMLINK libspdk_blobfs_bdev.so 00:01:58.238 LIB libspdk_bdev_aio.a 00:01:58.238 SYMLINK libspdk_bdev_passthru.so 00:01:58.238 SO libspdk_bdev_malloc.so.6.0 00:01:58.238 SO libspdk_bdev_delay.so.6.0 00:01:58.238 SO libspdk_bdev_zone_block.so.6.0 00:01:58.238 SYMLINK libspdk_bdev_null.so 00:01:58.238 SYMLINK libspdk_bdev_gpt.so 00:01:58.238 SO libspdk_bdev_aio.so.6.0 00:01:58.238 SYMLINK libspdk_bdev_error.so 00:01:58.238 SYMLINK libspdk_bdev_ftl.so 00:01:58.238 LIB libspdk_bdev_iscsi.a 00:01:58.238 SYMLINK libspdk_bdev_malloc.so 00:01:58.238 SYMLINK libspdk_bdev_delay.so 00:01:58.238 SO libspdk_bdev_iscsi.so.6.0 00:01:58.238 SYMLINK libspdk_bdev_zone_block.so 00:01:58.238 LIB libspdk_bdev_virtio.a 00:01:58.238 LIB libspdk_bdev_lvol.a 00:01:58.238 SYMLINK libspdk_bdev_aio.so 00:01:58.499 SO libspdk_bdev_virtio.so.6.0 00:01:58.499 SO libspdk_bdev_lvol.so.6.0 00:01:58.499 SYMLINK libspdk_bdev_iscsi.so 00:01:58.499 SYMLINK libspdk_bdev_lvol.so 00:01:58.499 SYMLINK libspdk_bdev_virtio.so 00:01:58.759 LIB libspdk_bdev_raid.a 00:01:58.759 SO libspdk_bdev_raid.so.6.0 00:01:58.759 SYMLINK libspdk_bdev_raid.so 00:01:59.700 LIB 
libspdk_bdev_nvme.a 00:01:59.961 SO libspdk_bdev_nvme.so.7.0 00:01:59.961 SYMLINK libspdk_bdev_nvme.so 00:02:00.534 CC module/event/subsystems/vmd/vmd.o 00:02:00.534 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:00.534 CC module/event/subsystems/iobuf/iobuf.o 00:02:00.534 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:00.534 CC module/event/subsystems/sock/sock.o 00:02:00.534 CC module/event/subsystems/scheduler/scheduler.o 00:02:00.534 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:00.534 CC module/event/subsystems/keyring/keyring.o 00:02:00.534 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:00.796 LIB libspdk_event_sock.a 00:02:00.796 LIB libspdk_event_vmd.a 00:02:00.796 LIB libspdk_event_keyring.a 00:02:00.796 LIB libspdk_event_vfu_tgt.a 00:02:00.796 LIB libspdk_event_iobuf.a 00:02:00.796 LIB libspdk_event_scheduler.a 00:02:00.796 LIB libspdk_event_vhost_blk.a 00:02:00.796 SO libspdk_event_sock.so.5.0 00:02:00.796 SO libspdk_event_vmd.so.6.0 00:02:00.796 SO libspdk_event_vfu_tgt.so.3.0 00:02:00.796 SO libspdk_event_iobuf.so.3.0 00:02:00.796 SO libspdk_event_keyring.so.1.0 00:02:00.796 SO libspdk_event_vhost_blk.so.3.0 00:02:00.796 SO libspdk_event_scheduler.so.4.0 00:02:00.796 SYMLINK libspdk_event_vmd.so 00:02:00.796 SYMLINK libspdk_event_sock.so 00:02:00.796 SYMLINK libspdk_event_vfu_tgt.so 00:02:01.057 SYMLINK libspdk_event_iobuf.so 00:02:01.057 SYMLINK libspdk_event_keyring.so 00:02:01.057 SYMLINK libspdk_event_scheduler.so 00:02:01.057 SYMLINK libspdk_event_vhost_blk.so 00:02:01.317 CC module/event/subsystems/accel/accel.o 00:02:01.317 LIB libspdk_event_accel.a 00:02:01.317 SO libspdk_event_accel.so.6.0 00:02:01.592 SYMLINK libspdk_event_accel.so 00:02:01.854 CC module/event/subsystems/bdev/bdev.o 00:02:02.116 LIB libspdk_event_bdev.a 00:02:02.116 SO libspdk_event_bdev.so.6.0 00:02:02.116 SYMLINK libspdk_event_bdev.so 00:02:02.377 CC module/event/subsystems/ublk/ublk.o 00:02:02.377 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:02.377 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:02.377 CC module/event/subsystems/scsi/scsi.o 00:02:02.377 CC module/event/subsystems/nbd/nbd.o 00:02:02.639 LIB libspdk_event_ublk.a 00:02:02.639 LIB libspdk_event_nbd.a 00:02:02.639 SO libspdk_event_ublk.so.3.0 00:02:02.639 LIB libspdk_event_scsi.a 00:02:02.639 SO libspdk_event_nbd.so.6.0 00:02:02.639 SO libspdk_event_scsi.so.6.0 00:02:02.639 LIB libspdk_event_nvmf.a 00:02:02.639 SYMLINK libspdk_event_ublk.so 00:02:02.639 SO libspdk_event_nvmf.so.6.0 00:02:02.639 SYMLINK libspdk_event_nbd.so 00:02:02.639 SYMLINK libspdk_event_scsi.so 00:02:02.900 SYMLINK libspdk_event_nvmf.so 00:02:03.162 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:03.162 CC module/event/subsystems/iscsi/iscsi.o 00:02:03.162 LIB libspdk_event_vhost_scsi.a 00:02:03.162 LIB libspdk_event_iscsi.a 00:02:03.423 SO libspdk_event_vhost_scsi.so.3.0 00:02:03.423 SO libspdk_event_iscsi.so.6.0 00:02:03.423 SYMLINK libspdk_event_vhost_scsi.so 00:02:03.423 SYMLINK libspdk_event_iscsi.so 00:02:03.683 SO libspdk.so.6.0 00:02:03.683 SYMLINK libspdk.so 00:02:03.942 CXX app/trace/trace.o 00:02:03.942 CC app/spdk_nvme_discover/discovery_aer.o 00:02:03.942 CC app/spdk_nvme_perf/perf.o 00:02:03.942 CC app/spdk_lspci/spdk_lspci.o 00:02:03.942 CC app/trace_record/trace_record.o 00:02:03.942 CC app/spdk_top/spdk_top.o 00:02:03.942 TEST_HEADER include/spdk/accel.h 00:02:03.942 TEST_HEADER include/spdk/barrier.h 00:02:03.942 TEST_HEADER include/spdk/base64.h 00:02:03.942 TEST_HEADER include/spdk/accel_module.h 00:02:03.942 
TEST_HEADER include/spdk/bdev_module.h 00:02:03.942 TEST_HEADER include/spdk/assert.h 00:02:03.942 CC app/iscsi_tgt/iscsi_tgt.o 00:02:03.942 TEST_HEADER include/spdk/bdev_zone.h 00:02:03.942 CC test/rpc_client/rpc_client_test.o 00:02:03.942 TEST_HEADER include/spdk/bit_array.h 00:02:03.942 TEST_HEADER include/spdk/bdev.h 00:02:03.942 CC app/vhost/vhost.o 00:02:03.942 CC app/spdk_nvme_identify/identify.o 00:02:03.942 TEST_HEADER include/spdk/blobfs.h 00:02:03.942 TEST_HEADER include/spdk/bit_pool.h 00:02:03.942 TEST_HEADER include/spdk/conf.h 00:02:03.942 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:03.942 CC app/nvmf_tgt/nvmf_main.o 00:02:04.201 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.201 TEST_HEADER include/spdk/cpuset.h 00:02:04.201 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.201 TEST_HEADER include/spdk/blob.h 00:02:04.201 TEST_HEADER include/spdk/config.h 00:02:04.201 TEST_HEADER include/spdk/crc16.h 00:02:04.201 TEST_HEADER include/spdk/crc64.h 00:02:04.201 TEST_HEADER include/spdk/dma.h 00:02:04.201 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.201 CC app/spdk_dd/spdk_dd.o 00:02:04.201 CC app/spdk_tgt/spdk_tgt.o 00:02:04.201 TEST_HEADER include/spdk/crc32.h 00:02:04.201 TEST_HEADER include/spdk/dif.h 00:02:04.201 TEST_HEADER include/spdk/endian.h 00:02:04.201 TEST_HEADER include/spdk/event.h 00:02:04.201 TEST_HEADER include/spdk/fd_group.h 00:02:04.201 TEST_HEADER include/spdk/fd.h 00:02:04.201 TEST_HEADER include/spdk/ftl.h 00:02:04.202 TEST_HEADER include/spdk/env.h 00:02:04.202 TEST_HEADER include/spdk/file.h 00:02:04.202 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.202 TEST_HEADER include/spdk/histogram_data.h 00:02:04.202 TEST_HEADER include/spdk/hexlify.h 00:02:04.202 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.202 TEST_HEADER include/spdk/init.h 00:02:04.202 TEST_HEADER include/spdk/idxd.h 00:02:04.202 TEST_HEADER include/spdk/iscsi_spec.h 00:02:04.202 TEST_HEADER include/spdk/json.h 00:02:04.202 TEST_HEADER include/spdk/ioat.h 00:02:04.202 TEST_HEADER include/spdk/keyring.h 00:02:04.202 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.202 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.202 TEST_HEADER include/spdk/log.h 00:02:04.202 TEST_HEADER include/spdk/keyring_module.h 00:02:04.202 TEST_HEADER include/spdk/lvol.h 00:02:04.202 TEST_HEADER include/spdk/memory.h 00:02:04.202 TEST_HEADER include/spdk/mmio.h 00:02:04.202 CC test/env/vtophys/vtophys.o 00:02:04.202 TEST_HEADER include/spdk/likely.h 00:02:04.202 TEST_HEADER include/spdk/nbd.h 00:02:04.202 CC test/app/histogram_perf/histogram_perf.o 00:02:04.202 TEST_HEADER include/spdk/nvme.h 00:02:04.202 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.202 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.202 TEST_HEADER include/spdk/notify.h 00:02:04.202 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.202 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.202 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.202 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:04.202 CC test/thread/poller_perf/poller_perf.o 00:02:04.202 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.202 CC test/nvme/e2edp/nvme_dp.o 00:02:04.202 LINK spdk_lspci 00:02:04.202 TEST_HEADER include/spdk/nvmf_transport.h 00:02:04.202 TEST_HEADER include/spdk/opal.h 00:02:04.202 TEST_HEADER include/spdk/nvmf.h 00:02:04.202 TEST_HEADER include/spdk/pci_ids.h 00:02:04.202 TEST_HEADER include/spdk/nvmf_spec.h 00:02:04.202 TEST_HEADER include/spdk/opal_spec.h 00:02:04.202 TEST_HEADER include/spdk/pipe.h 00:02:04.202 CC test/accel/dif/dif.o 00:02:04.202 TEST_HEADER 
include/spdk/queue.h 00:02:04.202 TEST_HEADER include/spdk/scsi.h 00:02:04.202 TEST_HEADER include/spdk/reduce.h 00:02:04.202 TEST_HEADER include/spdk/scsi_spec.h 00:02:04.202 CC test/nvme/simple_copy/simple_copy.o 00:02:04.202 CC test/nvme/err_injection/err_injection.o 00:02:04.202 TEST_HEADER include/spdk/rpc.h 00:02:04.202 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:04.202 CC test/nvme/sgl/sgl.o 00:02:04.202 CC test/event/app_repeat/app_repeat.o 00:02:04.202 TEST_HEADER include/spdk/scheduler.h 00:02:04.202 TEST_HEADER include/spdk/sock.h 00:02:04.202 CC test/env/memory/memory_ut.o 00:02:04.202 CC examples/util/zipf/zipf.o 00:02:04.202 TEST_HEADER include/spdk/stdinc.h 00:02:04.202 CC test/event/event_perf/event_perf.o 00:02:04.202 CC examples/vmd/led/led.o 00:02:04.202 TEST_HEADER include/spdk/trace.h 00:02:04.202 TEST_HEADER include/spdk/string.h 00:02:04.202 CC examples/blob/hello_world/hello_blob.o 00:02:04.202 CC examples/sock/hello_world/hello_sock.o 00:02:04.202 TEST_HEADER include/spdk/tree.h 00:02:04.202 TEST_HEADER include/spdk/ublk.h 00:02:04.202 TEST_HEADER include/spdk/thread.h 00:02:04.202 TEST_HEADER include/spdk/trace_parser.h 00:02:04.202 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:04.202 CC examples/nvme/hello_world/hello_world.o 00:02:04.202 CC examples/ioat/perf/perf.o 00:02:04.202 TEST_HEADER include/spdk/version.h 00:02:04.202 TEST_HEADER include/spdk/uuid.h 00:02:04.202 TEST_HEADER include/spdk/util.h 00:02:04.202 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:04.202 CC examples/idxd/perf/perf.o 00:02:04.202 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:04.202 CC test/env/pci/pci_ut.o 00:02:04.202 TEST_HEADER include/spdk/vhost.h 00:02:04.202 CC test/app/jsoncat/jsoncat.o 00:02:04.202 TEST_HEADER include/spdk/xor.h 00:02:04.202 CC examples/bdev/bdevperf/bdevperf.o 00:02:04.202 CC test/nvme/connect_stress/connect_stress.o 00:02:04.202 CC examples/nvme/abort/abort.o 00:02:04.202 TEST_HEADER include/spdk/zipf.h 00:02:04.202 CC examples/nvme/arbitration/arbitration.o 00:02:04.202 CC test/nvme/cuse/cuse.o 00:02:04.202 CXX test/cpp_headers/accel.o 00:02:04.202 CC test/bdev/bdevio/bdevio.o 00:02:04.202 TEST_HEADER include/spdk/vmd.h 00:02:04.202 CC test/event/reactor_perf/reactor_perf.o 00:02:04.202 CC app/fio/nvme/fio_plugin.o 00:02:04.202 CC test/nvme/reset/reset.o 00:02:04.202 CXX test/cpp_headers/assert.o 00:02:04.202 CXX test/cpp_headers/accel_module.o 00:02:04.202 CXX test/cpp_headers/bdev.o 00:02:04.202 LINK spdk_nvme_discover 00:02:04.202 CXX test/cpp_headers/barrier.o 00:02:04.202 CXX test/cpp_headers/bit_array.o 00:02:04.202 CC test/nvme/compliance/nvme_compliance.o 00:02:04.202 CXX test/cpp_headers/base64.o 00:02:04.202 CXX test/cpp_headers/bdev_module.o 00:02:04.202 CXX test/cpp_headers/blobfs_bdev.o 00:02:04.202 CXX test/cpp_headers/bdev_zone.o 00:02:04.202 CXX test/cpp_headers/bit_pool.o 00:02:04.202 CC test/nvme/aer/aer.o 00:02:04.202 CC examples/nvme/reconnect/reconnect.o 00:02:04.202 CC test/event/scheduler/scheduler.o 00:02:04.202 CXX test/cpp_headers/blob_bdev.o 00:02:04.202 CC examples/nvme/hotplug/hotplug.o 00:02:04.202 CXX test/cpp_headers/config.o 00:02:04.202 CC test/nvme/reserve/reserve.o 00:02:04.202 CXX test/cpp_headers/cpuset.o 00:02:04.202 CXX test/cpp_headers/blobfs.o 00:02:04.202 CXX test/cpp_headers/crc16.o 00:02:04.202 CXX test/cpp_headers/blob.o 00:02:04.202 CXX test/cpp_headers/crc32.o 00:02:04.202 CC test/nvme/fdp/fdp.o 00:02:04.202 CC test/app/stub/stub.o 00:02:04.202 CXX test/cpp_headers/dif.o 
00:02:04.202 CXX test/cpp_headers/dma.o 00:02:04.202 CXX test/cpp_headers/env_dpdk.o 00:02:04.202 CXX test/cpp_headers/conf.o 00:02:04.202 CXX test/cpp_headers/env.o 00:02:04.202 CXX test/cpp_headers/event.o 00:02:04.202 CXX test/cpp_headers/fd_group.o 00:02:04.202 CC test/app/bdev_svc/bdev_svc.o 00:02:04.202 CXX test/cpp_headers/file.o 00:02:04.202 CXX test/cpp_headers/gpt_spec.o 00:02:04.202 CXX test/cpp_headers/endian.o 00:02:04.202 CXX test/cpp_headers/crc64.o 00:02:04.202 CC test/dma/test_dma/test_dma.o 00:02:04.202 CXX test/cpp_headers/fd.o 00:02:04.202 CXX test/cpp_headers/histogram_data.o 00:02:04.464 CXX test/cpp_headers/ftl.o 00:02:04.464 CXX test/cpp_headers/hexlify.o 00:02:04.464 CC test/nvme/fused_ordering/fused_ordering.o 00:02:04.464 CC test/event/reactor/reactor.o 00:02:04.464 CXX test/cpp_headers/ioat_spec.o 00:02:04.464 CXX test/cpp_headers/idxd.o 00:02:04.464 LINK interrupt_tgt 00:02:04.464 LINK rpc_client_test 00:02:04.464 CXX test/cpp_headers/jsonrpc.o 00:02:04.464 CXX test/cpp_headers/idxd_spec.o 00:02:04.464 CXX test/cpp_headers/keyring.o 00:02:04.464 CXX test/cpp_headers/ioat.o 00:02:04.464 CXX test/cpp_headers/init.o 00:02:04.464 CC examples/accel/perf/accel_perf.o 00:02:04.464 CXX test/cpp_headers/keyring_module.o 00:02:04.464 CXX test/cpp_headers/iscsi_spec.o 00:02:04.464 CC examples/nvmf/nvmf/nvmf.o 00:02:04.464 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:04.464 CXX test/cpp_headers/json.o 00:02:04.464 CC test/nvme/overhead/overhead.o 00:02:04.464 CXX test/cpp_headers/lvol.o 00:02:04.464 CXX test/cpp_headers/memory.o 00:02:04.464 CXX test/cpp_headers/mmio.o 00:02:04.464 LINK vhost 00:02:04.464 CXX test/cpp_headers/nbd.o 00:02:04.464 CC test/nvme/startup/startup.o 00:02:04.464 CXX test/cpp_headers/likely.o 00:02:04.464 LINK spdk_trace_record 00:02:04.464 CXX test/cpp_headers/nvme.o 00:02:04.464 LINK iscsi_tgt 00:02:04.464 CXX test/cpp_headers/log.o 00:02:04.464 CC test/blobfs/mkfs/mkfs.o 00:02:04.464 CXX test/cpp_headers/notify.o 00:02:04.464 CXX test/cpp_headers/nvme_intel.o 00:02:04.464 CC examples/vmd/lsvmd/lsvmd.o 00:02:04.464 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:04.464 CXX test/cpp_headers/nvme_ocssd.o 00:02:04.464 CXX test/cpp_headers/nvme_spec.o 00:02:04.464 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:04.464 CXX test/cpp_headers/nvme_zns.o 00:02:04.464 CXX test/cpp_headers/nvmf_transport.o 00:02:04.464 CXX test/cpp_headers/nvmf_cmd.o 00:02:04.464 CXX test/cpp_headers/opal_spec.o 00:02:04.464 LINK histogram_perf 00:02:04.464 CXX test/cpp_headers/nvmf.o 00:02:04.464 CXX test/cpp_headers/nvmf_spec.o 00:02:04.464 CXX test/cpp_headers/opal.o 00:02:04.464 CXX test/cpp_headers/pci_ids.o 00:02:04.464 CXX test/cpp_headers/rpc.o 00:02:04.464 LINK spdk_tgt 00:02:04.464 CC examples/ioat/verify/verify.o 00:02:04.464 CC examples/thread/thread/thread_ex.o 00:02:04.464 CXX test/cpp_headers/pipe.o 00:02:04.464 CC test/nvme/boot_partition/boot_partition.o 00:02:04.464 CC test/env/mem_callbacks/mem_callbacks.o 00:02:04.464 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:04.464 CXX test/cpp_headers/reduce.o 00:02:04.464 LINK nvmf_tgt 00:02:04.464 CXX test/cpp_headers/scheduler.o 00:02:04.464 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:04.464 CC app/fio/bdev/fio_plugin.o 00:02:04.464 CXX test/cpp_headers/queue.o 00:02:04.464 LINK poller_perf 00:02:04.464 LINK vtophys 00:02:04.464 LINK jsoncat 00:02:04.464 LINK app_repeat 00:02:04.464 CC examples/bdev/hello_world/hello_bdev.o 00:02:04.464 LINK err_injection 00:02:04.724 CXX test/cpp_headers/scsi.o 00:02:04.724 CXX 
test/cpp_headers/sock.o 00:02:04.724 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:04.724 CXX test/cpp_headers/scsi_spec.o 00:02:04.724 LINK sgl 00:02:04.724 LINK ioat_perf 00:02:04.724 CC examples/blob/cli/blobcli.o 00:02:04.724 LINK spdk_trace 00:02:04.724 LINK hello_blob 00:02:04.724 LINK hello_sock 00:02:04.724 LINK pmr_persistence 00:02:04.724 LINK connect_stress 00:02:04.724 CXX test/cpp_headers/stdinc.o 00:02:04.724 LINK reset 00:02:04.724 LINK stub 00:02:04.724 CXX test/cpp_headers/string.o 00:02:04.724 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:04.724 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:04.724 CXX test/cpp_headers/thread.o 00:02:04.724 CXX test/cpp_headers/trace.o 00:02:04.724 LINK nvme_dp 00:02:04.724 CXX test/cpp_headers/trace_parser.o 00:02:04.724 CXX test/cpp_headers/tree.o 00:02:04.724 LINK aer 00:02:04.724 CXX test/cpp_headers/ublk.o 00:02:04.724 LINK reserve 00:02:04.724 CXX test/cpp_headers/util.o 00:02:04.724 CXX test/cpp_headers/uuid.o 00:02:04.724 LINK idxd_perf 00:02:04.724 CC test/lvol/esnap/esnap.o 00:02:04.724 CXX test/cpp_headers/version.o 00:02:04.984 LINK scheduler 00:02:04.984 CXX test/cpp_headers/vfio_user_pci.o 00:02:04.984 CXX test/cpp_headers/vfio_user_spec.o 00:02:04.984 LINK spdk_dd 00:02:04.984 CXX test/cpp_headers/vhost.o 00:02:04.984 LINK fused_ordering 00:02:04.984 CXX test/cpp_headers/vmd.o 00:02:04.984 LINK hotplug 00:02:04.984 CXX test/cpp_headers/xor.o 00:02:04.984 CXX test/cpp_headers/zipf.o 00:02:04.984 LINK dif 00:02:04.984 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:04.984 LINK arbitration 00:02:04.984 LINK cmb_copy 00:02:04.984 LINK abort 00:02:04.984 LINK mkfs 00:02:04.984 LINK nvme_compliance 00:02:04.984 LINK thread 00:02:04.984 LINK bdevio 00:02:04.984 LINK pci_ut 00:02:04.984 LINK spdk_nvme_perf 00:02:04.984 LINK reactor 00:02:05.243 LINK led 00:02:05.243 LINK spdk_top 00:02:05.243 LINK accel_perf 00:02:05.243 LINK nvme_fuzz 00:02:05.243 LINK zipf 00:02:05.243 LINK event_perf 00:02:05.243 LINK lsvmd 00:02:05.243 LINK reactor_perf 00:02:05.243 LINK mem_callbacks 00:02:05.243 LINK verify 00:02:05.243 LINK env_dpdk_post_init 00:02:05.243 LINK startup 00:02:05.243 LINK bdev_svc 00:02:05.243 LINK hello_bdev 00:02:05.243 LINK vhost_fuzz 00:02:05.243 LINK hello_world 00:02:05.543 LINK doorbell_aers 00:02:05.543 LINK blobcli 00:02:05.543 LINK simple_copy 00:02:05.543 LINK fdp 00:02:05.543 LINK memory_ut 00:02:05.543 LINK boot_partition 00:02:05.543 LINK bdevperf 00:02:05.543 LINK overhead 00:02:05.543 LINK nvmf 00:02:05.543 LINK test_dma 00:02:05.543 LINK reconnect 00:02:05.543 LINK cuse 00:02:05.807 LINK nvme_manage 00:02:05.807 LINK spdk_bdev 00:02:05.807 LINK spdk_nvme 00:02:05.807 LINK spdk_nvme_identify 00:02:06.380 LINK iscsi_fuzz 00:02:08.927 LINK esnap 00:02:09.188 00:02:09.188 real 0m49.485s 00:02:09.188 user 6m36.149s 00:02:09.188 sys 4m32.354s 00:02:09.188 20:33:33 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:09.188 20:33:33 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.188 ************************************ 00:02:09.188 END TEST make 00:02:09.188 ************************************ 00:02:09.188 20:33:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:09.188 20:33:33 -- pm/common@30 -- $ signal_monitor_resources TERM 00:02:09.189 20:33:33 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:02:09.189 20:33:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.189 20:33:33 -- pm/common@44 -- $ [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:09.189 20:33:33 -- pm/common@45 -- $ pid=2446041 00:02:09.189 20:33:33 -- pm/common@52 -- $ sudo kill -TERM 2446041 00:02:09.189 20:33:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.189 20:33:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:09.189 20:33:33 -- pm/common@45 -- $ pid=2446046 00:02:09.189 20:33:33 -- pm/common@52 -- $ sudo kill -TERM 2446046 00:02:09.189 20:33:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.189 20:33:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:09.189 20:33:33 -- pm/common@45 -- $ pid=2446045 00:02:09.189 20:33:33 -- pm/common@52 -- $ sudo kill -TERM 2446045 00:02:09.450 20:33:33 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.450 20:33:33 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:09.450 20:33:33 -- pm/common@45 -- $ pid=2446047 00:02:09.450 20:33:33 -- pm/common@52 -- $ sudo kill -TERM 2446047 00:02:09.450 20:33:34 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:09.450 20:33:34 -- nvmf/common.sh@7 -- # uname -s 00:02:09.450 20:33:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:09.450 20:33:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:09.450 20:33:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:09.450 20:33:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:09.450 20:33:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:09.450 20:33:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:09.450 20:33:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:09.450 20:33:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:09.450 20:33:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:09.450 20:33:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:09.450 20:33:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:02:09.450 20:33:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:02:09.450 20:33:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:09.450 20:33:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:09.450 20:33:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:09.450 20:33:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:09.450 20:33:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:09.450 20:33:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:09.450 20:33:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:09.450 20:33:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:09.450 20:33:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.450 20:33:34 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.450 20:33:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.450 20:33:34 -- paths/export.sh@5 -- # export PATH 00:02:09.450 20:33:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:09.450 20:33:34 -- nvmf/common.sh@47 -- # : 0 00:02:09.450 20:33:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:09.450 20:33:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:09.450 20:33:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:09.450 20:33:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:09.450 20:33:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:09.450 20:33:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:09.450 20:33:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:09.450 20:33:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:09.450 20:33:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:09.450 20:33:34 -- spdk/autotest.sh@32 -- # uname -s 00:02:09.450 20:33:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:09.450 20:33:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:09.450 20:33:34 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:09.450 20:33:34 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:09.450 20:33:34 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:09.450 20:33:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:09.716 20:33:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:09.716 20:33:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:09.716 20:33:34 -- spdk/autotest.sh@48 -- # udevadm_pid=2508802 00:02:09.716 20:33:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:09.716 20:33:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:09.716 20:33:34 -- pm/common@17 -- # local monitor 00:02:09.716 20:33:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.716 20:33:34 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2508803 00:02:09.716 20:33:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.716 20:33:34 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2508805 00:02:09.716 20:33:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.716 20:33:34 -- pm/common@21 -- # date +%s 00:02:09.716 20:33:34 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2508809 00:02:09.716 20:33:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:09.716 20:33:34 -- pm/common@21 -- # date +%s 00:02:09.716 20:33:34 -- pm/common@23 -- # 
MONITOR_RESOURCES_PIDS["$monitor"]=2508812 00:02:09.716 20:33:34 -- pm/common@26 -- # sleep 1 00:02:09.716 20:33:34 -- pm/common@21 -- # date +%s 00:02:09.716 20:33:34 -- pm/common@21 -- # date +%s 00:02:09.716 20:33:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713983614 00:02:09.716 20:33:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713983614 00:02:09.716 20:33:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713983614 00:02:09.716 20:33:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1713983614 00:02:09.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713983614_collect-vmstat.pm.log 00:02:09.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713983614_collect-bmc-pm.bmc.pm.log 00:02:09.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713983614_collect-cpu-load.pm.log 00:02:09.716 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1713983614_collect-cpu-temp.pm.log 00:02:10.658 20:33:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:10.658 20:33:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:10.658 20:33:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:10.658 20:33:35 -- common/autotest_common.sh@10 -- # set +x 00:02:10.658 20:33:35 -- spdk/autotest.sh@59 -- # create_test_list 00:02:10.658 20:33:35 -- common/autotest_common.sh@734 -- # xtrace_disable 00:02:10.658 20:33:35 -- common/autotest_common.sh@10 -- # set +x 00:02:10.658 20:33:35 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:10.658 20:33:35 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.658 20:33:35 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.658 20:33:35 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:10.659 20:33:35 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:10.659 20:33:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:10.659 20:33:35 -- common/autotest_common.sh@1441 -- # uname 00:02:10.659 20:33:35 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:02:10.659 20:33:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:10.659 20:33:35 -- common/autotest_common.sh@1461 -- # uname 00:02:10.659 20:33:35 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:02:10.659 20:33:35 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:10.659 20:33:35 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:10.659 20:33:35 -- spdk/autotest.sh@72 -- # hash lcov 00:02:10.659 20:33:35 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc 
== *\c\l\a\n\g* ]] 00:02:10.659 20:33:35 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:10.659 --rc lcov_branch_coverage=1 00:02:10.659 --rc lcov_function_coverage=1 00:02:10.659 --rc genhtml_branch_coverage=1 00:02:10.659 --rc genhtml_function_coverage=1 00:02:10.659 --rc genhtml_legend=1 00:02:10.659 --rc geninfo_all_blocks=1 00:02:10.659 ' 00:02:10.659 20:33:35 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:10.659 --rc lcov_branch_coverage=1 00:02:10.659 --rc lcov_function_coverage=1 00:02:10.659 --rc genhtml_branch_coverage=1 00:02:10.659 --rc genhtml_function_coverage=1 00:02:10.659 --rc genhtml_legend=1 00:02:10.659 --rc geninfo_all_blocks=1 00:02:10.659 ' 00:02:10.659 20:33:35 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:10.659 --rc lcov_branch_coverage=1 00:02:10.659 --rc lcov_function_coverage=1 00:02:10.659 --rc genhtml_branch_coverage=1 00:02:10.659 --rc genhtml_function_coverage=1 00:02:10.659 --rc genhtml_legend=1 00:02:10.659 --rc geninfo_all_blocks=1 00:02:10.659 --no-external' 00:02:10.659 20:33:35 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:10.659 --rc lcov_branch_coverage=1 00:02:10.659 --rc lcov_function_coverage=1 00:02:10.659 --rc genhtml_branch_coverage=1 00:02:10.659 --rc genhtml_function_coverage=1 00:02:10.659 --rc genhtml_legend=1 00:02:10.659 --rc geninfo_all_blocks=1 00:02:10.659 --no-external' 00:02:10.659 20:33:35 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:10.659 lcov: LCOV version 1.14 00:02:10.659 20:33:35 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:18.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 
00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:18.805 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:18.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:18.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:18.806 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:18.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:18.806 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:23.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 
00:02:23.014 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:33.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:33.009 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:33.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:33.009 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:33.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:33.009 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:39.610 20:34:04 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:39.610 20:34:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:02:39.610 20:34:04 -- common/autotest_common.sh@10 -- # set +x 00:02:39.610 20:34:04 -- spdk/autotest.sh@91 -- # rm -f 00:02:39.610 20:34:04 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:42.907 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:02:42.907 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:02:42.907 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:02:42.907 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:02:42.907 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:02:42.907 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:02:42.907 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:65:00.0 (144d a80a): Already using the nvme driver 00:02:43.167 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:02:43.167 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:02:43.428 20:34:08 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:43.428 20:34:08 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:43.428 20:34:08 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:43.428 20:34:08 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:43.428 20:34:08 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:43.428 20:34:08 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:43.428 20:34:08 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:43.428 20:34:08 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:43.428 20:34:08 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:43.428 20:34:08 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:43.428 20:34:08 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:43.428 20:34:08 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:43.428 20:34:08 -- 
spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:43.428 20:34:08 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:43.428 20:34:08 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:43.689 No valid GPT data, bailing 00:02:43.689 20:34:08 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:43.689 20:34:08 -- scripts/common.sh@391 -- # pt= 00:02:43.689 20:34:08 -- scripts/common.sh@392 -- # return 1 00:02:43.689 20:34:08 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:43.689 1+0 records in 00:02:43.689 1+0 records out 00:02:43.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00327164 s, 321 MB/s 00:02:43.689 20:34:08 -- spdk/autotest.sh@118 -- # sync 00:02:43.689 20:34:08 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:43.689 20:34:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:43.689 20:34:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:51.823 20:34:16 -- spdk/autotest.sh@124 -- # uname -s 00:02:51.823 20:34:16 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:51.823 20:34:16 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.823 20:34:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.823 20:34:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.823 20:34:16 -- common/autotest_common.sh@10 -- # set +x 00:02:51.823 ************************************ 00:02:51.823 START TEST setup.sh 00:02:51.823 ************************************ 00:02:51.823 20:34:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:51.823 * Looking for test storage... 00:02:51.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:51.823 20:34:16 -- setup/test-setup.sh@10 -- # uname -s 00:02:51.823 20:34:16 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:51.823 20:34:16 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:51.823 20:34:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:51.823 20:34:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:51.823 20:34:16 -- common/autotest_common.sh@10 -- # set +x 00:02:52.084 ************************************ 00:02:52.084 START TEST acl 00:02:52.084 ************************************ 00:02:52.084 20:34:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:52.084 * Looking for test storage... 
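[Annotation] The pre-cleanup step traced above (autotest.sh around the block_in_use check) probes each whole NVMe namespace for a partition table and, when none is found ("No valid GPT data, bailing"), zeroes the first MiB before syncing. The loop below is only an illustrative stand-in for that behavior, not SPDK's actual script; it assumes blkid is available and that a namespace with no recognizable partition table is safe to wipe.

# illustrative sketch of the wipe step, assuming blkid and whole-namespace devices
for dev in /dev/nvme*n*; do
  [[ $dev == *p* ]] && continue                    # skip partitions, keep whole namespaces
  pt=$(blkid -s PTTYPE -o value "$dev" || true)    # empty when no partition table is present
  if [[ -z $pt ]]; then
    echo "No valid GPT data, wiping $dev"
    dd if=/dev/zero of="$dev" bs=1M count=1        # clobber the first MiB of stale metadata
  fi
done
sync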
00:02:52.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:52.084 20:34:16 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:52.084 20:34:16 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:02:52.084 20:34:16 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:02:52.084 20:34:16 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:02:52.084 20:34:16 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:02:52.084 20:34:16 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:02:52.084 20:34:16 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:02:52.084 20:34:16 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:52.084 20:34:16 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:02:52.084 20:34:16 -- setup/acl.sh@12 -- # devs=() 00:02:52.084 20:34:16 -- setup/acl.sh@12 -- # declare -a devs 00:02:52.084 20:34:16 -- setup/acl.sh@13 -- # drivers=() 00:02:52.084 20:34:16 -- setup/acl.sh@13 -- # declare -A drivers 00:02:52.084 20:34:16 -- setup/acl.sh@51 -- # setup reset 00:02:52.084 20:34:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:52.084 20:34:16 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:56.285 20:34:20 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:56.285 20:34:20 -- setup/acl.sh@16 -- # local dev driver 00:02:56.285 20:34:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.285 20:34:20 -- setup/acl.sh@15 -- # setup output status 00:02:56.285 20:34:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.285 20:34:20 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:59.586 Hugepages 00:02:59.586 node hugesize free / total 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 00:02:59.586 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.1 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.586 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.586 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.586 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.846 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:02:59.846 20:34:24 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:59.846 20:34:24 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:02:59.846 20:34:24 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:59.846 20:34:24 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:02:59.847 20:34:24 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:59.847 20:34:24 -- setup/acl.sh@20 -- # continue 00:02:59.847 20:34:24 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:59.847 20:34:24 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:59.847 20:34:24 -- setup/acl.sh@54 -- # run_test denied denied 00:02:59.847 20:34:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:59.847 20:34:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:59.847 20:34:24 -- common/autotest_common.sh@10 -- # set +x 00:02:59.847 ************************************ 00:02:59.847 START TEST denied 00:02:59.847 ************************************ 00:02:59.847 20:34:24 -- common/autotest_common.sh@1111 -- # denied 00:02:59.847 20:34:24 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:02:59.847 20:34:24 -- setup/acl.sh@38 -- # setup output config 00:02:59.847 20:34:24 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:02:59.847 20:34:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.847 20:34:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:04.052 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:03:04.053 20:34:28 -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:03:04.053 20:34:28 -- setup/acl.sh@28 -- # local dev driver 00:03:04.053 20:34:28 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:04.053 20:34:28 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:03:04.053 20:34:28 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:03:04.053 20:34:28 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:04.053 20:34:28 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:04.053 20:34:28 -- setup/acl.sh@41 -- # setup reset 00:03:04.053 20:34:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.053 20:34:28 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.388 00:03:09.388 real 0m8.878s 00:03:09.388 user 0m2.940s 00:03:09.388 sys 0m5.097s 00:03:09.388 20:34:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:09.388 20:34:33 -- common/autotest_common.sh@10 -- # set +x 00:03:09.388 ************************************ 00:03:09.388 END TEST denied 00:03:09.388 ************************************ 00:03:09.388 20:34:33 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:09.388 20:34:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:09.388 20:34:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:09.388 20:34:33 -- common/autotest_common.sh@10 -- # set +x 00:03:09.388 ************************************ 00:03:09.388 START TEST allowed 00:03:09.388 ************************************ 00:03:09.388 20:34:33 -- common/autotest_common.sh@1111 -- # allowed 00:03:09.388 20:34:33 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:03:09.388 20:34:33 -- setup/acl.sh@45 -- # setup output config 00:03:09.388 20:34:33 -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:03:09.388 20:34:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.388 20:34:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
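[Annotation] The denied/allowed tests above drive scripts/setup.sh config with the PCI_BLOCKED and PCI_ALLOWED environment variables and then grep its output for the expected line about BDF 0000:65:00.0. The two helpers below are a minimal sketch of that check pattern, using the exact grep targets shown in the trace; they are not the acl.sh source itself.

# minimal sketch of the ACL check pattern exercised above (not the real acl.sh)
run_denied() {
  PCI_BLOCKED=' 0000:65:00.0' ./scripts/setup.sh config \
    | grep -q 'Skipping denied controller at 0000:65:00.0'     # blocked BDF must be skipped
}
run_allowed() {
  PCI_ALLOWED='0000:65:00.0' ./scripts/setup.sh config \
    | grep -Eq '0000:65:00.0 .*: nvme -> .*'                   # allowed BDF must be rebound
}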
00:03:14.675 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:14.675 20:34:39 -- setup/acl.sh@47 -- # verify 00:03:14.675 20:34:39 -- setup/acl.sh@28 -- # local dev driver 00:03:14.675 20:34:39 -- setup/acl.sh@48 -- # setup reset 00:03:14.675 20:34:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.675 20:34:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:18.888 00:03:18.888 real 0m9.485s 00:03:18.888 user 0m2.728s 00:03:18.888 sys 0m4.994s 00:03:18.888 20:34:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:18.888 20:34:43 -- common/autotest_common.sh@10 -- # set +x 00:03:18.888 ************************************ 00:03:18.888 END TEST allowed 00:03:18.888 ************************************ 00:03:18.888 00:03:18.888 real 0m26.471s 00:03:18.888 user 0m8.700s 00:03:18.888 sys 0m15.327s 00:03:18.888 20:34:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:18.888 20:34:43 -- common/autotest_common.sh@10 -- # set +x 00:03:18.888 ************************************ 00:03:18.888 END TEST acl 00:03:18.888 ************************************ 00:03:18.888 20:34:43 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.888 20:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:18.888 20:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:18.888 20:34:43 -- common/autotest_common.sh@10 -- # set +x 00:03:18.888 ************************************ 00:03:18.888 START TEST hugepages 00:03:18.888 ************************************ 00:03:18.888 20:34:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:18.888 * Looking for test storage... 
00:03:18.888 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.888 20:34:43 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:18.888 20:34:43 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:18.888 20:34:43 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:18.888 20:34:43 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:18.888 20:34:43 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:18.888 20:34:43 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:18.888 20:34:43 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:18.888 20:34:43 -- setup/common.sh@18 -- # local node= 00:03:18.888 20:34:43 -- setup/common.sh@19 -- # local var val 00:03:18.888 20:34:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:18.888 20:34:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.888 20:34:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.888 20:34:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.888 20:34:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.888 20:34:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 102285476 kB' 'MemAvailable: 106586420 kB' 'Buffers: 8520 kB' 'Cached: 14944808 kB' 'SwapCached: 0 kB' 'Active: 11899704 kB' 'Inactive: 3674916 kB' 'Active(anon): 10772192 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 624544 kB' 'Mapped: 202392 kB' 'Shmem: 10150900 kB' 'KReclaimable: 539460 kB' 'Slab: 1318356 kB' 'SReclaimable: 539460 kB' 'SUnreclaim: 778896 kB' 'KernelStack: 27440 kB' 'PageTables: 9260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12286072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235708 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 
-- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.888 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.888 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 
00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 
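The long runs of "[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" / "continue" entries above and below are setup/common.sh's get_meminfo walking /proc/meminfo one field at a time under xtrace until it reaches the requested key, then echoing that key's value. A minimal sketch of that pattern, assuming the standard /proc/meminfo layout (an illustrative reconstruction with a hypothetical function name, not the exact SPDK source):

    # Scan /proc/meminfo for one key; every non-matching field shows up
    # as a "continue" entry in the xtrace.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do   # "kB" lands in $_, the number in $val
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize    # prints 2048 on this test node, matching the echo below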
00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # continue 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:18.889 20:34:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:18.889 20:34:43 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:18.889 20:34:43 -- setup/common.sh@33 -- # echo 2048 00:03:18.889 20:34:43 -- setup/common.sh@33 -- # return 0 00:03:18.889 20:34:43 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:18.889 20:34:43 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:18.889 20:34:43 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:18.889 20:34:43 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:18.889 20:34:43 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:18.889 20:34:43 -- 
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:18.889 20:34:43 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:18.889 20:34:43 -- setup/hugepages.sh@207 -- # get_nodes 00:03:18.889 20:34:43 -- setup/hugepages.sh@27 -- # local node 00:03:18.889 20:34:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.889 20:34:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:18.889 20:34:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.889 20:34:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.889 20:34:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.889 20:34:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.889 20:34:43 -- setup/hugepages.sh@208 -- # clear_hp 00:03:18.889 20:34:43 -- setup/hugepages.sh@37 -- # local node hp 00:03:18.889 20:34:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.889 20:34:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.889 20:34:43 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.889 20:34:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.889 20:34:43 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.889 20:34:43 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.889 20:34:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.889 20:34:43 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.890 20:34:43 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.890 20:34:43 -- setup/hugepages.sh@41 -- # echo 0 00:03:18.890 20:34:43 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.890 20:34:43 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.890 20:34:43 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:18.890 20:34:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:18.890 20:34:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:18.890 20:34:43 -- common/autotest_common.sh@10 -- # set +x 00:03:18.890 ************************************ 00:03:18.890 START TEST default_setup 00:03:18.890 ************************************ 00:03:18.890 20:34:43 -- common/autotest_common.sh@1111 -- # default_setup 00:03:18.890 20:34:43 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:18.890 20:34:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.890 20:34:43 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:18.890 20:34:43 -- setup/hugepages.sh@51 -- # shift 00:03:18.890 20:34:43 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:18.890 20:34:43 -- setup/hugepages.sh@52 -- # local node_ids 00:03:18.890 20:34:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.890 20:34:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.890 20:34:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:18.890 20:34:43 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:18.890 20:34:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.890 20:34:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.890 20:34:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.890 20:34:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.890 20:34:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.890 20:34:43 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
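In the entries above, clear_hp zeroes any pre-existing per-node reservation (the repeated "echo 0" lines) and get_test_nr_hugepages converts the requested size into a page count: 2097152 kB at the default 2048 kB hugepage size gives 2097152 / 2048 = 1024 pages, which the entries that follow assign to node 0. A rough equivalent of those two steps, assuming root and the standard sysfs layout (illustrative only, not the script's exact code):

    # Zero every per-node hugepage reservation.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done

    # Requested test size -> hugepage count.
    default_hugepages=2048          # kB, from Hugepagesize
    size_kb=2097152
    nr_hugepages=$(( size_kb / default_hugepages ))
    echo "nr_hugepages=$nr_hugepages"   # 1024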
00:03:18.890 20:34:43 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:18.890 20:34:43 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:18.890 20:34:43 -- setup/hugepages.sh@73 -- # return 0 00:03:18.890 20:34:43 -- setup/hugepages.sh@137 -- # setup output 00:03:18.890 20:34:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.890 20:34:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.194 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:22.194 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:22.769 20:34:47 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:22.769 20:34:47 -- setup/hugepages.sh@89 -- # local node 00:03:22.769 20:34:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.769 20:34:47 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.769 20:34:47 -- setup/hugepages.sh@92 -- # local surp 00:03:22.769 20:34:47 -- setup/hugepages.sh@93 -- # local resv 00:03:22.769 20:34:47 -- setup/hugepages.sh@94 -- # local anon 00:03:22.769 20:34:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.769 20:34:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.769 20:34:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.769 20:34:47 -- setup/common.sh@18 -- # local node= 00:03:22.769 20:34:47 -- setup/common.sh@19 -- # local var val 00:03:22.769 20:34:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.769 20:34:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.769 20:34:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.769 20:34:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.769 20:34:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.769 20:34:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104455356 kB' 'MemAvailable: 108756236 kB' 'Buffers: 8520 kB' 'Cached: 14944928 kB' 'SwapCached: 0 kB' 'Active: 11916628 kB' 'Inactive: 3674916 kB' 'Active(anon): 10789116 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641456 kB' 'Mapped: 202700 kB' 'Shmem: 10151020 kB' 'KReclaimable: 539396 kB' 'Slab: 1316304 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776908 kB' 'KernelStack: 
27552 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12287064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235612 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
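Before the field scan that continues below, the get_meminfo call traced above first picks its data source: with no node argument it falls back to /proc/meminfo (hence the failed test on /sys/devices/system/node/node/meminfo), snapshots it with mapfile, and strips the "Node N " prefix that per-node meminfo files carry, which is what the large printf dump above shows. A hedged sketch of that source selection, with the argument handling and the explicit shopt added here for self-containment:

    # Per-node meminfo if a node was given, otherwise /proc/meminfo.
    node=${1:-}
    mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    shopt -s extglob
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix; a no-op for /proc/meminfo
    printf '%s\n' "${mem[@]}"          # the snapshot dumped in the trace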
00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.769 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.769 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- 
# [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.770 20:34:47 -- setup/common.sh@33 -- # echo 0 00:03:22.770 20:34:47 -- setup/common.sh@33 -- # return 0 00:03:22.770 20:34:47 -- setup/hugepages.sh@97 -- # anon=0 00:03:22.770 20:34:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.770 20:34:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.770 20:34:47 -- setup/common.sh@18 -- # local node= 00:03:22.770 20:34:47 -- setup/common.sh@19 -- # local var val 00:03:22.770 20:34:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.770 20:34:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.770 20:34:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.770 20:34:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.770 20:34:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.770 20:34:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104455020 kB' 'MemAvailable: 108755900 kB' 'Buffers: 8520 kB' 'Cached: 14944944 kB' 'SwapCached: 0 kB' 'Active: 11917432 kB' 'Inactive: 3674916 kB' 'Active(anon): 10789920 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642296 kB' 'Mapped: 202700 kB' 'Shmem: 10151036 kB' 'KReclaimable: 539396 kB' 'Slab: 1316284 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776888 kB' 'KernelStack: 27552 kB' 'PageTables: 9432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12287444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235564 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.770 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.770 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 
20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 
-- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': 
' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.771 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.771 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.772 20:34:47 -- setup/common.sh@33 -- # echo 0 00:03:22.772 20:34:47 -- setup/common.sh@33 -- # return 0 00:03:22.772 20:34:47 -- setup/hugepages.sh@99 -- # surp=0 00:03:22.772 20:34:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.772 20:34:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.772 20:34:47 -- setup/common.sh@18 -- # local node= 00:03:22.772 20:34:47 -- setup/common.sh@19 -- # local var val 00:03:22.772 20:34:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.772 20:34:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.772 20:34:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.772 20:34:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.772 20:34:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.772 20:34:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104457332 kB' 'MemAvailable: 108758212 kB' 'Buffers: 8520 kB' 'Cached: 14944956 kB' 'SwapCached: 0 kB' 'Active: 11916632 kB' 'Inactive: 3674916 kB' 'Active(anon): 10789120 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641512 kB' 'Mapped: 202652 kB' 'Shmem: 10151048 kB' 'KReclaimable: 539396 kB' 'Slab: 1316364 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776968 kB' 'KernelStack: 27536 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12287460 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235564 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.772 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.772 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- 
setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 
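The HugePages_Rsvd scan finishing below returns 0, and with AnonHugePages and HugePages_Surp already read as 0 above, verify_nr_hugepages reports the counters and checks that the 1024 configured pages are fully accounted for before reading HugePages_Total per node. Roughly, as a paraphrase of the checks visible in the trace rather than the script's exact code:

    nr_hugepages=1024
    anon=0 surp=0 resv=0               # AnonHugePages / HugePages_Surp / HugePages_Rsvd from get_meminfo
    if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    else
        echo "hugepage accounting mismatch" >&2
    fi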
00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.773 20:34:47 -- setup/common.sh@33 -- # echo 0 00:03:22.773 20:34:47 -- setup/common.sh@33 -- # return 0 00:03:22.773 20:34:47 -- setup/hugepages.sh@100 -- # resv=0 00:03:22.773 20:34:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.773 nr_hugepages=1024 00:03:22.773 20:34:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.773 resv_hugepages=0 00:03:22.773 20:34:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.773 surplus_hugepages=0 00:03:22.773 20:34:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.773 anon_hugepages=0 00:03:22.773 20:34:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.773 20:34:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.773 20:34:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.773 20:34:47 -- setup/common.sh@17 -- # 
local get=HugePages_Total 00:03:22.773 20:34:47 -- setup/common.sh@18 -- # local node= 00:03:22.773 20:34:47 -- setup/common.sh@19 -- # local var val 00:03:22.773 20:34:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.773 20:34:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.773 20:34:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.773 20:34:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.773 20:34:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.773 20:34:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104457688 kB' 'MemAvailable: 108758568 kB' 'Buffers: 8520 kB' 'Cached: 14944972 kB' 'SwapCached: 0 kB' 'Active: 11916784 kB' 'Inactive: 3674916 kB' 'Active(anon): 10789272 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641648 kB' 'Mapped: 202652 kB' 'Shmem: 10151064 kB' 'KReclaimable: 539396 kB' 'Slab: 1316364 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776968 kB' 'KernelStack: 27504 kB' 'PageTables: 9272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12287472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235564 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.773 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.773 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 
20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 
20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.774 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.774 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.775 20:34:47 -- setup/common.sh@33 -- # echo 1024 00:03:22.775 20:34:47 -- setup/common.sh@33 -- # return 0 00:03:22.775 20:34:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.775 20:34:47 -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.775 20:34:47 -- setup/hugepages.sh@27 -- # local node 00:03:22.775 20:34:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.775 20:34:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.775 20:34:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.775 20:34:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.775 20:34:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.775 20:34:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.775 20:34:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.775 20:34:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.775 20:34:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.775 20:34:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.775 20:34:47 -- setup/common.sh@18 -- # local node=0 00:03:22.775 20:34:47 -- setup/common.sh@19 -- # local var val 00:03:22.775 20:34:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:22.775 20:34:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.775 20:34:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.775 20:34:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.775 20:34:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.775 20:34:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 54791080 
kB' 'MemUsed: 10867944 kB' 'SwapCached: 0 kB' 'Active: 6006752 kB' 'Inactive: 190144 kB' 'Active(anon): 5417780 kB' 'Inactive(anon): 0 kB' 'Active(file): 588972 kB' 'Inactive(file): 190144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5731368 kB' 'Mapped: 160520 kB' 'AnonPages: 468768 kB' 'Shmem: 4952252 kB' 'KernelStack: 16104 kB' 'PageTables: 6120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320884 kB' 'Slab: 733312 kB' 'SReclaimable: 320884 kB' 'SUnreclaim: 412428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 
-- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 
20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.775 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.775 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # continue 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:22.776 20:34:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:22.776 20:34:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.776 20:34:47 -- setup/common.sh@33 -- # echo 0 00:03:22.776 20:34:47 -- setup/common.sh@33 -- # return 0 00:03:22.776 20:34:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.776 20:34:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.776 20:34:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.776 20:34:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.776 20:34:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:22.776 node0=1024 expecting 1024 00:03:22.776 20:34:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.776 00:03:22.776 real 0m3.790s 00:03:22.776 user 0m1.369s 00:03:22.776 sys 0m2.397s 00:03:22.776 20:34:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:22.776 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:03:22.776 ************************************ 00:03:22.776 END TEST default_setup 00:03:22.776 ************************************ 00:03:22.776 20:34:47 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:22.776 20:34:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:22.776 20:34:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:22.776 20:34:47 -- common/autotest_common.sh@10 -- # set +x 00:03:23.037 ************************************ 00:03:23.037 START TEST per_node_1G_alloc 00:03:23.037 ************************************ 00:03:23.037 20:34:47 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:03:23.037 20:34:47 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:23.037 20:34:47 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:23.037 20:34:47 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:23.037 20:34:47 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:23.037 20:34:47 -- setup/hugepages.sh@51 -- # shift 00:03:23.037 20:34:47 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:23.037 20:34:47 -- setup/hugepages.sh@52 -- # local node_ids 00:03:23.037 20:34:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:23.037 20:34:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:23.037 20:34:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:23.037 20:34:47 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:23.037 20:34:47 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:23.037 20:34:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:23.037 20:34:47 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:23.037 20:34:47 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:23.037 20:34:47 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:23.037 20:34:47 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:23.037 20:34:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.037 20:34:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:23.037 20:34:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:23.037 20:34:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:23.037 20:34:47 -- setup/hugepages.sh@73 -- # return 0 00:03:23.037 20:34:47 -- setup/hugepages.sh@146 -- # 
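The xtrace above is setup/common.sh's get_meminfo helper expanding its field-by-field scan of /proc/meminfo (or a node's meminfo file) until the requested key matches, followed by setup/hugepages.sh sizing the per_node_1G_alloc test: the requested 1G per node works out to 512 default-sized (2048 kB) pages per node, which produces the NRHUGE=512 HUGENODE=0,1 run of scripts/setup.sh seen next. Below is a condensed sketch of that logic; names and structure are illustrative, not the verbatim SPDK helpers.

# Condensed, illustrative sketch of the logic traced above (not the verbatim
# SPDK setup/common.sh / setup/hugepages.sh code).

# Look up one field from /proc/meminfo, or from a node's meminfo file when a
# node id is given; per-node files prefix every line with "Node N ", so strip
# that before matching the key.
get_meminfo_sketch() {    # usage: get_meminfo_sketch HugePages_Total [node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    sed -E 's/^Node [0-9]+ //' "$mem_f" |
        awk -v key="$get" -F': +' '$1 == key { print $2 + 0; exit }'
}

# Per-node sizing for the 1G-per-node test: convert the requested size into
# default-sized hugepages (2048 kB on this system), then hand the count to
# setup.sh (run from the spdk checkout, as the trace does).
size_kb=1048576
hp_kb=$(get_meminfo_sketch Hugepagesize)   # 2048 here
nr=$(( size_kb / hp_kb ))                  # -> 512 pages per node
NRHUGE=$nr HUGENODE=0,1 ./scripts/setup.sh # matches NRHUGE=512 HUGENODE=0,1 in the trace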
NRHUGE=512 00:03:23.037 20:34:47 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:23.037 20:34:47 -- setup/hugepages.sh@146 -- # setup output 00:03:23.037 20:34:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:23.037 20:34:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.336 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:26.336 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:26.336 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:26.602 20:34:51 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:26.602 20:34:51 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:26.602 20:34:51 -- setup/hugepages.sh@89 -- # local node 00:03:26.602 20:34:51 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.602 20:34:51 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.602 20:34:51 -- setup/hugepages.sh@92 -- # local surp 00:03:26.602 20:34:51 -- setup/hugepages.sh@93 -- # local resv 00:03:26.602 20:34:51 -- setup/hugepages.sh@94 -- # local anon 00:03:26.602 20:34:51 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.602 20:34:51 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.602 20:34:51 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.602 20:34:51 -- setup/common.sh@18 -- # local node= 00:03:26.602 20:34:51 -- setup/common.sh@19 -- # local var val 00:03:26.602 20:34:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.602 20:34:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.602 20:34:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.602 20:34:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.602 20:34:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.602 20:34:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104460872 kB' 'MemAvailable: 108761752 kB' 'Buffers: 8520 kB' 'Cached: 14945072 kB' 'SwapCached: 0 kB' 'Active: 11917868 kB' 'Inactive: 3674916 kB' 'Active(anon): 10790356 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
642140 kB' 'Mapped: 201800 kB' 'Shmem: 10151164 kB' 'KReclaimable: 539396 kB' 'Slab: 1315420 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776024 kB' 'KernelStack: 27728 kB' 'PageTables: 9624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12277756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235852 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 
00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 
00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.602 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.602 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.603 20:34:51 -- setup/common.sh@33 -- # echo 0 00:03:26.603 20:34:51 -- setup/common.sh@33 -- # return 0 00:03:26.603 20:34:51 -- setup/hugepages.sh@97 -- # anon=0 00:03:26.603 20:34:51 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.603 20:34:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.603 20:34:51 -- setup/common.sh@18 -- # local node= 00:03:26.603 20:34:51 -- setup/common.sh@19 -- # local var val 00:03:26.603 20:34:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.603 20:34:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.603 20:34:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.603 20:34:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.603 20:34:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.603 20:34:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104462980 kB' 'MemAvailable: 108763860 kB' 'Buffers: 8520 kB' 'Cached: 14945072 kB' 'SwapCached: 0 kB' 'Active: 11918076 kB' 'Inactive: 3674916 kB' 'Active(anon): 10790564 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642428 kB' 'Mapped: 201732 kB' 'Shmem: 10151164 kB' 'KReclaimable: 539396 kB' 'Slab: 1315364 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 775968 kB' 'KernelStack: 27840 kB' 'PageTables: 9764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12277768 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235852 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 
20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.603 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.603 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 
00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 
-- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 
-- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.604 20:34:51 -- setup/common.sh@33 -- # echo 0 00:03:26.604 20:34:51 -- setup/common.sh@33 -- # return 0 00:03:26.604 20:34:51 -- setup/hugepages.sh@99 -- # surp=0 00:03:26.604 20:34:51 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:26.604 20:34:51 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:26.604 20:34:51 -- setup/common.sh@18 -- # local node= 00:03:26.604 20:34:51 -- setup/common.sh@19 -- # local var val 00:03:26.604 20:34:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.604 20:34:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.604 20:34:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.604 20:34:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.604 20:34:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.604 20:34:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104463300 kB' 'MemAvailable: 108764180 kB' 'Buffers: 8520 kB' 'Cached: 14945072 kB' 'SwapCached: 0 kB' 'Active: 11916800 kB' 'Inactive: 3674916 kB' 'Active(anon): 10789288 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641584 kB' 'Mapped: 201656 kB' 'Shmem: 10151164 kB' 'KReclaimable: 539396 kB' 'Slab: 1315300 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 775904 kB' 'KernelStack: 27664 kB' 'PageTables: 9496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12277780 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 
20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.604 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.604 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 
00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.605 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.605 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:26.606 20:34:51 -- setup/common.sh@33 -- # echo 0 00:03:26.606 20:34:51 -- setup/common.sh@33 -- # return 0 00:03:26.606 20:34:51 -- setup/hugepages.sh@100 -- # resv=0 00:03:26.606 20:34:51 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:26.606 nr_hugepages=1024 00:03:26.606 20:34:51 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:26.606 resv_hugepages=0 00:03:26.606 20:34:51 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:26.606 surplus_hugepages=0 00:03:26.606 20:34:51 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:26.606 anon_hugepages=0 00:03:26.606 20:34:51 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.606 20:34:51 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 
00:03:26.606 20:34:51 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:26.606 20:34:51 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:26.606 20:34:51 -- setup/common.sh@18 -- # local node= 00:03:26.606 20:34:51 -- setup/common.sh@19 -- # local var val 00:03:26.606 20:34:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.606 20:34:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.606 20:34:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.606 20:34:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.606 20:34:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.606 20:34:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104460680 kB' 'MemAvailable: 108761560 kB' 'Buffers: 8520 kB' 'Cached: 14945100 kB' 'SwapCached: 0 kB' 'Active: 11916888 kB' 'Inactive: 3674916 kB' 'Active(anon): 10789376 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641584 kB' 'Mapped: 201656 kB' 'Shmem: 10151192 kB' 'KReclaimable: 539396 kB' 'Slab: 1315300 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 775904 kB' 'KernelStack: 27632 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12277796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 
-- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.606 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.606 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 
00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- 
setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:26.607 20:34:51 -- setup/common.sh@33 -- # echo 1024 00:03:26.607 20:34:51 -- setup/common.sh@33 -- # return 0 00:03:26.607 20:34:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:26.607 20:34:51 -- setup/hugepages.sh@112 -- # get_nodes 00:03:26.607 20:34:51 -- setup/hugepages.sh@27 -- # local node 00:03:26.607 20:34:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.607 20:34:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.607 20:34:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:26.607 20:34:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:26.607 20:34:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:26.607 20:34:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:26.607 20:34:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.607 20:34:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.607 20:34:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:26.607 20:34:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.607 20:34:51 -- setup/common.sh@18 -- # local node=0 00:03:26.607 20:34:51 -- setup/common.sh@19 -- # local var val 00:03:26.607 20:34:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.607 20:34:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.607 20:34:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:26.607 20:34:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:26.607 20:34:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.607 20:34:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:26.607 20:34:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55841272 kB' 'MemUsed: 9817752 kB' 'SwapCached: 0 kB' 'Active: 6008636 kB' 'Inactive: 190144 kB' 'Active(anon): 5419664 kB' 'Inactive(anon): 0 kB' 'Active(file): 588972 kB' 'Inactive(file): 190144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5731496 kB' 'Mapped: 160548 kB' 'AnonPages: 470680 kB' 'Shmem: 4952380 kB' 'KernelStack: 16088 kB' 'PageTables: 5920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320884 kB' 'Slab: 732832 kB' 'SReclaimable: 320884 kB' 'SUnreclaim: 411948 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.607 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.607 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # 
continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 
20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.608 20:34:51 -- setup/common.sh@33 -- # echo 0 00:03:26.608 20:34:51 -- setup/common.sh@33 -- # return 0 00:03:26.608 20:34:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.608 20:34:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.608 20:34:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.608 20:34:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.608 20:34:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.608 20:34:51 -- setup/common.sh@18 -- # local node=1 00:03:26.608 20:34:51 -- setup/common.sh@19 -- # local var val 00:03:26.608 20:34:51 -- setup/common.sh@20 -- # local mem_f mem 00:03:26.608 20:34:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.608 20:34:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.608 20:34:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.608 20:34:51 -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.608 20:34:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.608 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.608 20:34:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679816 kB' 'MemFree: 48619592 kB' 'MemUsed: 12060224 kB' 'SwapCached: 0 kB' 'Active: 5908848 kB' 'Inactive: 3484772 kB' 'Active(anon): 5370308 kB' 'Inactive(anon): 0 kB' 'Active(file): 538540 kB' 'Inactive(file): 3484772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9222140 kB' 'Mapped: 41108 kB' 'AnonPages: 171584 kB' 'Shmem: 5198828 kB' 'KernelStack: 11544 kB' 'PageTables: 3604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 218512 kB' 'Slab: 582468 kB' 'SReclaimable: 218512 kB' 'SUnreclaim: 363956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 
00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # continue 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # IFS=': ' 00:03:26.609 20:34:51 -- setup/common.sh@31 -- # read -r var val _ 00:03:26.609 20:34:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.609 20:34:51 -- setup/common.sh@33 -- # echo 0 00:03:26.609 20:34:51 -- setup/common.sh@33 -- # return 0 00:03:26.609 20:34:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.609 20:34:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.609 20:34:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.609 20:34:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.609 20:34:51 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.609 node0=512 expecting 512 00:03:26.609 20:34:51 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.609 20:34:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.609 20:34:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.609 20:34:51 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:26.609 node1=512 expecting 512 00:03:26.609 20:34:51 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.609 00:03:26.609 real 0m3.754s 00:03:26.609 user 0m1.361s 00:03:26.609 sys 0m2.428s 00:03:26.609 20:34:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:26.609 20:34:51 -- common/autotest_common.sh@10 -- # set +x 00:03:26.609 ************************************ 00:03:26.609 END TEST per_node_1G_alloc 00:03:26.609 ************************************ 00:03:26.870 20:34:51 -- 
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:26.870 20:34:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:26.870 20:34:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:26.870 20:34:51 -- common/autotest_common.sh@10 -- # set +x 00:03:26.870 ************************************ 00:03:26.870 START TEST even_2G_alloc 00:03:26.870 ************************************ 00:03:26.870 20:34:51 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:03:26.870 20:34:51 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:26.870 20:34:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.870 20:34:51 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.870 20:34:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.870 20:34:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.870 20:34:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.870 20:34:51 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.870 20:34:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.870 20:34:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.870 20:34:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.870 20:34:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.870 20:34:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.870 20:34:51 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.870 20:34:51 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.870 20:34:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.870 20:34:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.870 20:34:51 -- setup/hugepages.sh@83 -- # : 512 00:03:26.870 20:34:51 -- setup/hugepages.sh@84 -- # : 1 00:03:26.870 20:34:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.870 20:34:51 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.870 20:34:51 -- setup/hugepages.sh@83 -- # : 0 00:03:26.870 20:34:51 -- setup/hugepages.sh@84 -- # : 0 00:03:26.870 20:34:51 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.870 20:34:51 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:26.870 20:34:51 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:26.870 20:34:51 -- setup/hugepages.sh@153 -- # setup output 00:03:26.871 20:34:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.871 20:34:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:30.169 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:30.169 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:30.169 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:30.169 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:30.169 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:30.169 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:30.169 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:30.170 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:00:01.3 (8086 
0b00): Already using the vfio-pci driver 00:03:30.170 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:30.170 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:30.433 20:34:54 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:30.434 20:34:54 -- setup/hugepages.sh@89 -- # local node 00:03:30.434 20:34:54 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:30.434 20:34:54 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:30.434 20:34:54 -- setup/hugepages.sh@92 -- # local surp 00:03:30.434 20:34:54 -- setup/hugepages.sh@93 -- # local resv 00:03:30.434 20:34:54 -- setup/hugepages.sh@94 -- # local anon 00:03:30.434 20:34:54 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:30.434 20:34:54 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:30.434 20:34:54 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:30.434 20:34:54 -- setup/common.sh@18 -- # local node= 00:03:30.434 20:34:54 -- setup/common.sh@19 -- # local var val 00:03:30.434 20:34:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.434 20:34:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.434 20:34:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.434 20:34:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.434 20:34:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.434 20:34:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104447776 kB' 'MemAvailable: 108748656 kB' 'Buffers: 8520 kB' 'Cached: 14945220 kB' 'SwapCached: 0 kB' 'Active: 11918140 kB' 'Inactive: 3674916 kB' 'Active(anon): 10790628 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642672 kB' 'Mapped: 201860 kB' 'Shmem: 10151312 kB' 'KReclaimable: 539396 kB' 'Slab: 1315444 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776048 kB' 'KernelStack: 27712 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12278552 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235948 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 
20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.434 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.434 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.435 20:34:54 -- 
setup/common.sh@33 -- # echo 0 00:03:30.435 20:34:54 -- setup/common.sh@33 -- # return 0 00:03:30.435 20:34:54 -- setup/hugepages.sh@97 -- # anon=0 00:03:30.435 20:34:54 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:30.435 20:34:54 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.435 20:34:54 -- setup/common.sh@18 -- # local node= 00:03:30.435 20:34:54 -- setup/common.sh@19 -- # local var val 00:03:30.435 20:34:54 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.435 20:34:54 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.435 20:34:54 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.435 20:34:54 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.435 20:34:54 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.435 20:34:54 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104449596 kB' 'MemAvailable: 108750476 kB' 'Buffers: 8520 kB' 'Cached: 14945220 kB' 'SwapCached: 0 kB' 'Active: 11917788 kB' 'Inactive: 3674916 kB' 'Active(anon): 10790276 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642268 kB' 'Mapped: 201696 kB' 'Shmem: 10151312 kB' 'KReclaimable: 539396 kB' 'Slab: 1315252 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 775856 kB' 'KernelStack: 27552 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12276932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235916 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:54 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:54 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 
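
Editor's note: the trace above and below is setup/common.sh's get_meminfo walking /proc/meminfo one field at a time: each line is split on ': ', every key that is not the requested one (here HugePages_Surp) hits the [[ ... ]]/continue branch, and the matching value is echoed, falling back to 0. A minimal sketch of that lookup pattern, assuming a plain read of /proc/meminfo rather than the script's mapfile-based handling, with an illustrative helper name that is not the script's real function:

#!/usr/bin/env bash
# Hedged sketch of the lookup pattern seen in the trace (not the actual setup/common.sh code).
get_meminfo_value() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # Skip fields until the requested key, mirroring the [[ ... ]]/continue loop in the trace.
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
  echo 0
}
# Example: get_meminfo_value HugePages_Surp   -> prints 0 on this node
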
00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.435 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.435 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ 
ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.436 20:34:55 -- setup/common.sh@33 -- # echo 0 00:03:30.436 20:34:55 -- setup/common.sh@33 -- # return 0 00:03:30.436 20:34:55 -- setup/hugepages.sh@99 -- # surp=0 00:03:30.436 20:34:55 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:30.436 20:34:55 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:30.436 20:34:55 -- setup/common.sh@18 -- # local node= 00:03:30.436 20:34:55 -- setup/common.sh@19 -- # local var val 00:03:30.436 20:34:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.436 20:34:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.436 20:34:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.436 20:34:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.436 20:34:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.436 20:34:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.436 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 
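
Editor's note: with the anonymous, surplus and reserved hugepage counts collected (all 0 in this run), the verify step that follows only needs to confirm that the configured total accounts for the target plus surplus and reserved pages, as the (( 1024 == nr_hugepages + surp + resv )) check further down shows. A hedged sketch of that arithmetic, reusing the illustrative helper from the note above; variable names follow setup/hugepages.sh but the real script does more:

# Sketch of the verification arithmetic performed next in the trace (assumed simplification).
nr_hugepages=1024
anon=$(get_meminfo_value AnonHugePages)     # 0 kB in this run
surp=$(get_meminfo_value HugePages_Surp)    # 0
resv=$(get_meminfo_value HugePages_Rsvd)    # 0
total=$(get_meminfo_value HugePages_Total)  # 1024
(( total == nr_hugepages + surp + resv )) && echo "hugepage count OK"
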
00:03:30.436 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.436 20:34:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104450040 kB' 'MemAvailable: 108750920 kB' 'Buffers: 8520 kB' 'Cached: 14945232 kB' 'SwapCached: 0 kB' 'Active: 11917808 kB' 'Inactive: 3674916 kB' 'Active(anon): 10790296 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642232 kB' 'Mapped: 201696 kB' 'Shmem: 10151324 kB' 'KReclaimable: 539396 kB' 'Slab: 1315244 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 775848 kB' 'KernelStack: 27632 kB' 'PageTables: 9412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12278576 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235932 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.437 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.437 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 
20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:30.438 20:34:55 -- setup/common.sh@33 -- # echo 0 00:03:30.438 20:34:55 -- setup/common.sh@33 -- # return 0 00:03:30.438 20:34:55 -- setup/hugepages.sh@100 -- # resv=0 00:03:30.438 20:34:55 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:30.438 nr_hugepages=1024 00:03:30.438 20:34:55 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:30.438 resv_hugepages=0 00:03:30.438 20:34:55 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:30.438 surplus_hugepages=0 00:03:30.438 20:34:55 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:30.438 anon_hugepages=0 00:03:30.438 20:34:55 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.438 20:34:55 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:30.438 20:34:55 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:30.438 20:34:55 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:30.438 20:34:55 -- setup/common.sh@18 -- # local node= 00:03:30.438 20:34:55 -- setup/common.sh@19 -- # local var val 00:03:30.438 20:34:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.438 20:34:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.438 20:34:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:30.438 20:34:55 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:30.438 20:34:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.438 20:34:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104451932 kB' 'MemAvailable: 108752812 kB' 'Buffers: 8520 kB' 'Cached: 14945248 kB' 'SwapCached: 0 kB' 'Active: 11917428 kB' 'Inactive: 3674916 kB' 'Active(anon): 10789916 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641860 kB' 'Mapped: 
201696 kB' 'Shmem: 10151340 kB' 'KReclaimable: 539396 kB' 'Slab: 1315244 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 775848 kB' 'KernelStack: 27408 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12275696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235820 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.438 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.438 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
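(The trace above is setup/common.sh's get_meminfo helper walking /proc/meminfo field by field and skipping everything until it reaches the requested key — HugePages_Total in this case — whose value it then echoes. As a rough standalone illustration only, the helper name and the awk approach below are mine and are not part of the SPDK scripts; the same lookup can be written as a single awk call.)

# Hypothetical helper, shown only to make the field-by-field scan above easier to follow:
# print the value of one /proc/meminfo field.
get_meminfo_field() {
    local key=$1
    awk -v k="${key}:" '$1 == k {print $2}' /proc/meminfo
}
get_meminfo_field HugePages_Total    # prints 1024 on this run, matching the printf above
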
00:03:30.439 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.439 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.439 20:34:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.740 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.740 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:30.741 20:34:55 -- setup/common.sh@33 -- # echo 1024 00:03:30.741 20:34:55 -- setup/common.sh@33 -- # return 0 00:03:30.741 20:34:55 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:30.741 20:34:55 -- setup/hugepages.sh@112 -- # get_nodes 00:03:30.741 20:34:55 -- setup/hugepages.sh@27 -- # local node 00:03:30.741 20:34:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.741 20:34:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.741 20:34:55 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:30.741 20:34:55 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:30.741 20:34:55 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:30.741 20:34:55 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:30.741 20:34:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.741 20:34:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.741 20:34:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:30.741 20:34:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.741 20:34:55 -- setup/common.sh@18 -- # local node=0 00:03:30.741 20:34:55 -- setup/common.sh@19 -- # local var val 00:03:30.741 20:34:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.741 20:34:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.741 20:34:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:30.741 20:34:55 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:30.741 20:34:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.741 20:34:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55833448 kB' 'MemUsed: 9825576 kB' 'SwapCached: 0 kB' 'Active: 6009980 kB' 'Inactive: 190144 kB' 'Active(anon): 5421008 kB' 'Inactive(anon): 0 kB' 'Active(file): 588972 kB' 'Inactive(file): 190144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5731620 kB' 'Mapped: 160588 kB' 'AnonPages: 471776 kB' 'Shmem: 4952504 kB' 'KernelStack: 16072 kB' 'PageTables: 5928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320884 kB' 'Slab: 732716 kB' 'SReclaimable: 320884 kB' 'SUnreclaim: 411832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 
20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- 
setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.741 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.741 20:34:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@33 -- # echo 0 00:03:30.742 20:34:55 -- setup/common.sh@33 -- # return 0 00:03:30.742 20:34:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.742 20:34:55 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:30.742 20:34:55 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:30.742 20:34:55 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:30.742 20:34:55 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:30.742 20:34:55 -- setup/common.sh@18 -- # local node=1 00:03:30.742 20:34:55 -- setup/common.sh@19 -- # local var val 00:03:30.742 20:34:55 -- setup/common.sh@20 -- # local mem_f mem 00:03:30.742 20:34:55 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:30.742 20:34:55 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:30.742 20:34:55 -- setup/common.sh@24 -- 
# mem_f=/sys/devices/system/node/node1/meminfo 00:03:30.742 20:34:55 -- setup/common.sh@28 -- # mapfile -t mem 00:03:30.742 20:34:55 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679816 kB' 'MemFree: 48618728 kB' 'MemUsed: 12061088 kB' 'SwapCached: 0 kB' 'Active: 5907428 kB' 'Inactive: 3484772 kB' 'Active(anon): 5368888 kB' 'Inactive(anon): 0 kB' 'Active(file): 538540 kB' 'Inactive(file): 3484772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9222148 kB' 'Mapped: 41108 kB' 'AnonPages: 170124 kB' 'Shmem: 5198836 kB' 'KernelStack: 11416 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 218512 kB' 'Slab: 582464 kB' 'SReclaimable: 218512 kB' 'SUnreclaim: 363952 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- 
setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.742 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.742 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # continue 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.743 20:34:55 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.743 20:34:55 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:30.743 20:34:55 -- setup/common.sh@33 -- # echo 0 00:03:30.743 20:34:55 -- setup/common.sh@33 -- # return 0 00:03:30.743 20:34:55 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.743 20:34:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.743 20:34:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.743 20:34:55 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:30.743 node0=512 expecting 512 00:03:30.743 20:34:55 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:30.743 20:34:55 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:30.743 20:34:55 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:30.743 20:34:55 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:30.743 node1=512 expecting 512 00:03:30.743 20:34:55 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:30.743 00:03:30.743 real 0m3.726s 00:03:30.743 user 0m1.423s 00:03:30.743 sys 0m2.306s 00:03:30.743 20:34:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:30.743 20:34:55 -- common/autotest_common.sh@10 -- # set +x 00:03:30.743 ************************************ 00:03:30.743 END TEST even_2G_alloc 00:03:30.743 ************************************ 00:03:30.743 20:34:55 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:30.743 20:34:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:30.743 20:34:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:30.743 20:34:55 -- common/autotest_common.sh@10 -- # set +x 00:03:30.743 ************************************ 00:03:30.743 START TEST odd_alloc 00:03:30.743 ************************************ 00:03:30.743 20:34:55 -- common/autotest_common.sh@1111 -- # odd_alloc 00:03:30.743 20:34:55 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:30.743 20:34:55 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:30.743 20:34:55 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:30.743 20:34:55 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:30.743 20:34:55 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:30.743 20:34:55 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:30.743 20:34:55 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:30.743 20:34:55 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:30.743 20:34:55 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:30.743 20:34:55 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:30.743 20:34:55 
-- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:30.743 20:34:55 -- setup/hugepages.sh@83 -- # : 513 00:03:30.743 20:34:55 -- setup/hugepages.sh@84 -- # : 1 00:03:30.743 20:34:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:30.743 20:34:55 -- setup/hugepages.sh@83 -- # : 0 00:03:30.743 20:34:55 -- setup/hugepages.sh@84 -- # : 0 00:03:30.743 20:34:55 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:30.743 20:34:55 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:30.743 20:34:55 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:30.743 20:34:55 -- setup/hugepages.sh@160 -- # setup output 00:03:30.743 20:34:55 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:30.743 20:34:55 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.046 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:34.046 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:34.046 20:34:58 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:34.046 20:34:58 -- setup/hugepages.sh@89 -- # local node 00:03:34.046 20:34:58 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.046 20:34:58 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.046 20:34:58 -- setup/hugepages.sh@92 -- # local surp 00:03:34.046 20:34:58 -- setup/hugepages.sh@93 -- # local resv 00:03:34.046 20:34:58 -- setup/hugepages.sh@94 -- # local anon 00:03:34.046 20:34:58 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.046 20:34:58 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.046 20:34:58 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.046 20:34:58 -- setup/common.sh@18 -- # local node= 00:03:34.046 20:34:58 -- setup/common.sh@19 -- # local var val 00:03:34.046 20:34:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.046 20:34:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.046 20:34:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.046 20:34:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.046 20:34:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.046 
20:34:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104442516 kB' 'MemAvailable: 108743396 kB' 'Buffers: 8520 kB' 'Cached: 14945364 kB' 'SwapCached: 0 kB' 'Active: 11918896 kB' 'Inactive: 3674916 kB' 'Active(anon): 10791384 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642660 kB' 'Mapped: 201864 kB' 'Shmem: 10151456 kB' 'KReclaimable: 539396 kB' 'Slab: 1315560 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776164 kB' 'KernelStack: 27616 kB' 'PageTables: 9512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12276264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235804 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 
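(This part of verify_nr_hugepages — setup/hugepages.sh@96-@97 in the trace above — first inspects the transparent-hugepage policy string, "always [madvise] never" here, and then pulls AnonHugePages out of /proc/meminfo as the anonymous-hugepage baseline before counting the explicitly reserved pages. Below is a loose sketch of those two inputs only, not the script itself; the variable names are mine.)

# Read the THP policy and the AnonHugePages counter that the verification uses as its baseline.
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)            # "always [madvise] never" in this run
anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)  # 0 kB in this run
echo "THP policy: $thp, AnonHugePages: ${anon_kb} kB"
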
00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.046 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.046 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 
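(When get_meminfo is called with a node argument, as in the node0/node1 HugePages_Surp reads earlier in this trace, it switches from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo and strips the leading "Node N " prefix with the extglob expansion mem=("${mem[@]#Node +([0-9]) }"). A hedged awk equivalent of that per-node lookup follows; the wrapper function is illustrative and not taken from the scripts.)

# Read one field from the per-node meminfo file; the key is field $3 because every
# line in the sysfs copy carries a "Node <id>" prefix.
node_meminfo_field() {
    local node=$1 key=$2
    awk -v k="${key}:" '$1 == "Node" && $3 == k {print $4}' \
        "/sys/devices/system/node/node${node}/meminfo"
}
node_meminfo_field 0 HugePages_Surp    # 0 on both nodes in the run above
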
00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.047 20:34:58 -- setup/common.sh@33 -- # echo 0 00:03:34.047 20:34:58 -- setup/common.sh@33 -- # return 0 00:03:34.047 20:34:58 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.047 20:34:58 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.047 20:34:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.047 20:34:58 -- setup/common.sh@18 -- # local node= 00:03:34.047 20:34:58 -- setup/common.sh@19 -- # local var val 00:03:34.047 20:34:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.047 20:34:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.047 20:34:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.047 20:34:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.047 20:34:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.047 20:34:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104442016 kB' 'MemAvailable: 108742896 kB' 'Buffers: 8520 kB' 'Cached: 14945364 kB' 'SwapCached: 0 kB' 'Active: 11919632 kB' 'Inactive: 3674916 kB' 'Active(anon): 10792120 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 
kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643428 kB' 'Mapped: 201864 kB' 'Shmem: 10151456 kB' 'KReclaimable: 539396 kB' 'Slab: 1315556 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776160 kB' 'KernelStack: 27632 kB' 'PageTables: 9552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12276276 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235804 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
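(The odd_alloc test being verified here asked setup.sh for HUGEMEM=2049 MiB with HUGE_EVEN_ALLOC=yes, i.e. 1025 two-megabyte pages, and get_test_nr_hugepages_per_node split them as 513 on node 0 and 512 on node 1 — the hugepages.sh@82-@84 lines earlier in the trace. The real allocation is performed by spdk/scripts/setup.sh; the following is only a loose sketch of an even-with-remainder split using the standard per-node sysfs knob, and it needs root.)

# Illustrative only: distribute an odd 2 MiB hugepage count across NUMA nodes,
# giving the remainder to the lowest-numbered node (513/512 for 1025 pages on 2 nodes).
total=1025
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( total / ${#nodes[@]} ))
extra=$(( total % ${#nodes[@]} ))
for i in "${!nodes[@]}"; do
    count=$(( per_node + (i < extra ? 1 : 0) ))
    echo "$count" | sudo tee "${nodes[i]}/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
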
00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 
20:34:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.047 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.047 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.048 20:34:58 -- setup/common.sh@33 -- # echo 0 00:03:34.048 20:34:58 -- setup/common.sh@33 -- # return 0 00:03:34.048 20:34:58 -- setup/hugepages.sh@99 -- # surp=0 00:03:34.048 20:34:58 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.048 20:34:58 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.048 20:34:58 -- setup/common.sh@18 -- # local node= 00:03:34.048 20:34:58 -- setup/common.sh@19 -- # local var val 00:03:34.048 20:34:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.048 20:34:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.048 20:34:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.048 20:34:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.048 20:34:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.048 20:34:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104442696 kB' 'MemAvailable: 108743576 kB' 'Buffers: 8520 kB' 'Cached: 14945364 kB' 'SwapCached: 0 kB' 'Active: 11919496 kB' 'Inactive: 3674916 kB' 'Active(anon): 10791984 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643052 kB' 'Mapped: 201728 kB' 'Shmem: 10151456 kB' 'KReclaimable: 539396 kB' 'Slab: 1315556 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776160 kB' 'KernelStack: 27616 kB' 'PageTables: 9472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12276292 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:34.048 
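Each get_meminfo call traced above follows the same shape: snapshot the meminfo source into an array, then walk the "key: value" pairs until the requested key matches and echo its value; the long runs of per-key continue lines are that walk. A condensed, self-contained sketch of the lookup against the system-wide /proc/meminfo; get_meminfo_field is an illustrative name, not the SPDK helper itself:

    #!/usr/bin/env bash
    # Look up one field of /proc/meminfo, e.g. HugePages_Surp.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
            echo "$val"                        # a trailing "kB" unit lands in $_ and is dropped
            return 0
        done < /proc/meminfo
        echo 0                                 # absent key reported as 0
    }

    surp=$(get_meminfo_field HugePages_Surp)
    echo "surplus_hugepages=$surp"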
20:34:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 
20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 
00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.048 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.048 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.048 20:34:58 -- setup/common.sh@33 -- # echo 0 
00:03:34.048 20:34:58 -- setup/common.sh@33 -- # return 0 00:03:34.048 20:34:58 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.048 20:34:58 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:34.048 nr_hugepages=1025 00:03:34.048 20:34:58 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.048 resv_hugepages=0 00:03:34.048 20:34:58 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.048 surplus_hugepages=0 00:03:34.048 20:34:58 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.048 anon_hugepages=0 00:03:34.048 20:34:58 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:34.048 20:34:58 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:34.048 20:34:58 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.048 20:34:58 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.048 20:34:58 -- setup/common.sh@18 -- # local node= 00:03:34.048 20:34:58 -- setup/common.sh@19 -- # local var val 00:03:34.048 20:34:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.048 20:34:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.048 20:34:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.048 20:34:58 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.048 20:34:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.048 20:34:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104443784 kB' 'MemAvailable: 108744664 kB' 'Buffers: 8520 kB' 'Cached: 14945388 kB' 'SwapCached: 0 kB' 'Active: 11917904 kB' 'Inactive: 3674916 kB' 'Active(anon): 10790392 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642108 kB' 'Mapped: 201728 kB' 'Shmem: 10151480 kB' 'KReclaimable: 539396 kB' 'Slab: 1315500 kB' 'SReclaimable: 539396 kB' 'SUnreclaim: 776104 kB' 'KernelStack: 27520 kB' 'PageTables: 9216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12276304 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ MemAvailable == 
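At this point the script has collected the anon, surplus and reserved counts, echoes the summary (nr_hugepages=1025, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), and re-reads HugePages_Total to confirm the totals add up. A hedged sketch of that accounting step; the exact assertion wording is an interpretation of the (( ... )) checks in the trace, and the awk helper is illustrative:

    #!/usr/bin/env bash
    # After requesting 1025 hugepages, verify the kernel's accounting lines up.
    hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

    nr_requested=1025
    surp=$(hp HugePages_Surp)
    resv=$(hp HugePages_Rsvd)
    total=$(hp HugePages_Total)

    echo "nr_hugepages=$nr_requested resv_hugepages=$resv surplus_hugepages=$surp"

    # The run above expects the requested count to match the kernel total,
    # with no surplus or reserved pages outstanding.
    (( total == nr_requested && surp == 0 && resv == 0 )) || {
        echo "hugepage accounting mismatch" >&2
        exit 1
    }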
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 
20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- 
setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.049 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.049 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.049 20:34:58 -- setup/common.sh@33 -- # echo 1025 00:03:34.049 20:34:58 -- setup/common.sh@33 -- # return 0 00:03:34.049 20:34:58 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:34.049 20:34:58 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.049 20:34:58 -- setup/hugepages.sh@27 -- # local node 00:03:34.049 20:34:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.049 20:34:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:34.049 20:34:58 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.049 20:34:58 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:34.049 20:34:58 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.049 20:34:58 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.049 20:34:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.049 20:34:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.049 20:34:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.049 20:34:58 -- setup/common.sh@17 -- # local 
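With the system-wide total confirmed at 1025, the trace moves on to the per-node view: get_nodes records 512 pages expected on one NUMA node and 513 on the other, since an odd request cannot split evenly across two nodes. A sketch of that per-node read, assuming the standard sysfs layout for the 2048 kB page size reported in the dumps above:

    #!/usr/bin/env bash
    # Read the 2 MiB hugepage count of every NUMA node and check the sum.
    shopt -s extglob nullglob

    declare -a per_node
    total=0
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}                      # node0 -> 0, node1 -> 1
        per_node[n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        (( total += per_node[n] ))
    done

    declare -p per_node                       # e.g. ([0]="512" [1]="513")
    (( total == 1025 )) && echo "per-node counts sum to the requested 1025"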
get=HugePages_Surp 00:03:34.049 20:34:58 -- setup/common.sh@18 -- # local node=0 00:03:34.050 20:34:58 -- setup/common.sh@19 -- # local var val 00:03:34.050 20:34:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.050 20:34:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.050 20:34:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.050 20:34:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.050 20:34:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.050 20:34:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55825336 kB' 'MemUsed: 9833688 kB' 'SwapCached: 0 kB' 'Active: 6008104 kB' 'Inactive: 190144 kB' 'Active(anon): 5419132 kB' 'Inactive(anon): 0 kB' 'Active(file): 588972 kB' 'Inactive(file): 190144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5731720 kB' 'Mapped: 160620 kB' 'AnonPages: 469724 kB' 'Shmem: 4952604 kB' 'KernelStack: 16040 kB' 'PageTables: 5824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320884 kB' 'Slab: 732840 kB' 'SReclaimable: 320884 kB' 'SUnreclaim: 411956 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 
-- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ 
FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@33 -- # echo 0 00:03:34.050 20:34:58 -- setup/common.sh@33 -- # return 0 00:03:34.050 20:34:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.050 20:34:58 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:34.050 20:34:58 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.050 20:34:58 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:34.050 20:34:58 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.050 20:34:58 -- setup/common.sh@18 -- # local node=1 00:03:34.050 20:34:58 -- setup/common.sh@19 -- # local var val 00:03:34.050 20:34:58 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.050 20:34:58 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.050 20:34:58 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:34.050 20:34:58 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:34.050 20:34:58 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.050 20:34:58 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679816 kB' 'MemFree: 48619448 kB' 'MemUsed: 12060368 kB' 'SwapCached: 0 kB' 'Active: 5910104 kB' 'Inactive: 3484772 kB' 'Active(anon): 5371564 kB' 'Inactive(anon): 0 kB' 'Active(file): 538540 kB' 'Inactive(file): 3484772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9222208 kB' 'Mapped: 41108 kB' 'AnonPages: 172696 kB' 'Shmem: 5198896 kB' 'KernelStack: 11464 kB' 'PageTables: 3348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 218512 kB' 'Slab: 582660 kB' 'SReclaimable: 218512 kB' 'SUnreclaim: 364148 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 
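The node-local lookups work the same way as the system-wide ones, except the source is /sys/devices/system/node/nodeN/meminfo, where every line carries a "Node N " prefix; the mem=("${mem[@]#Node +([0-9]) }") step in the trace strips that prefix before the key comparison. A small sketch of the same idea, with an illustrative function name:

    #!/usr/bin/env bash
    # Fetch one field from a node-local meminfo, dropping the "Node N " prefix.
    shopt -s extglob

    node_meminfo_field() {
        local node=$1 get=$2 line
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }       # "Node 1 HugePages_Surp: 0" -> "HugePages_Surp: 0"
            [[ $line == "$get":* ]] || continue
            line=${line#*:}
            echo "${line//[!0-9]/}"           # keep the number, drop blanks and a kB unit
            return 0
        done < "/sys/devices/system/node/node$node/meminfo"
        return 1
    }

    node_meminfo_field 1 HugePages_Surp       # 0 in the node1 dump shown above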
-- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 
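The trace surrounding this point is setup/common.sh's get_meminfo helper scanning a per-node meminfo file field by field until it reaches the requested key (HugePages_Surp on node 1 here). A minimal standalone sketch of that scan pattern, assuming per-node meminfo files under /sys; the node number and key below are illustrative, and the real helper caches the file into an array rather than piping through sed:

    get=HugePages_Surp
    node=1
    mem_f=/proc/meminfo
    # the traced helper switches to the per-node file when a node is given
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    # per-node files prefix every line with "Node <N> "; drop it, then split
    # each "key: value" pair the same way the trace above does (IFS=': ')
    sed "s/^Node $node //" "$mem_f" | while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            break
        fi
    done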
00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.050 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.050 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # continue 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.051 20:34:58 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.051 20:34:58 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.051 20:34:58 -- setup/common.sh@33 -- # echo 0 00:03:34.051 20:34:58 -- setup/common.sh@33 -- # return 0 00:03:34.051 20:34:58 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.051 20:34:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.051 20:34:58 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.051 20:34:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.051 20:34:58 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:34.051 node0=512 expecting 513 00:03:34.051 20:34:58 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.051 20:34:58 -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.051 20:34:58 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.051 20:34:58 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:34.051 node1=513 expecting 512 00:03:34.051 20:34:58 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:34.051 00:03:34.051 real 0m3.293s 00:03:34.051 user 0m1.138s 00:03:34.051 sys 0m2.108s 00:03:34.051 20:34:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:34.051 20:34:58 -- common/autotest_common.sh@10 -- # set +x 00:03:34.051 ************************************ 00:03:34.051 END TEST odd_alloc 00:03:34.051 ************************************ 00:03:34.051 20:34:58 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:34.051 20:34:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:34.051 20:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:34.051 20:34:58 -- common/autotest_common.sh@10 -- # set +x 00:03:34.311 ************************************ 00:03:34.312 START TEST custom_alloc 00:03:34.312 ************************************ 00:03:34.312 20:34:58 -- common/autotest_common.sh@1111 -- # custom_alloc 00:03:34.312 20:34:58 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:34.312 20:34:58 -- setup/hugepages.sh@169 -- # local node 00:03:34.312 20:34:58 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:34.312 20:34:58 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:34.312 20:34:58 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:34.312 20:34:58 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:34.312 20:34:58 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:34.312 20:34:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:34.312 20:34:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.312 20:34:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.312 20:34:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.312 20:34:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:34.312 20:34:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.312 20:34:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.312 20:34:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.312 20:34:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:34.312 20:34:58 -- setup/hugepages.sh@83 -- # : 256 00:03:34.312 20:34:58 -- setup/hugepages.sh@84 -- # : 1 00:03:34.312 20:34:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:34.312 20:34:58 -- setup/hugepages.sh@83 -- # : 0 00:03:34.312 20:34:58 -- setup/hugepages.sh@84 -- # : 0 00:03:34.312 20:34:58 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:34.312 20:34:58 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:34.312 20:34:58 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:34.312 20:34:58 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:34.312 
20:34:58 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:34.312 20:34:58 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:34.312 20:34:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.312 20:34:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.312 20:34:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.312 20:34:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.312 20:34:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.312 20:34:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.312 20:34:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:34.312 20:34:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:34.312 20:34:58 -- setup/hugepages.sh@78 -- # return 0 00:03:34.312 20:34:58 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:34.312 20:34:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:34.312 20:34:58 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:34.312 20:34:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:34.312 20:34:58 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:34.312 20:34:58 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:34.312 20:34:58 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:34.312 20:34:58 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.312 20:34:58 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:34.312 20:34:58 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.312 20:34:58 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.312 20:34:58 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.312 20:34:58 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:34.312 20:34:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:34.312 20:34:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:34.312 20:34:58 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:34.312 20:34:58 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:34.312 20:34:58 -- setup/hugepages.sh@78 -- # return 0 00:03:34.312 20:34:58 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:34.312 20:34:58 -- setup/hugepages.sh@187 -- # setup output 00:03:34.312 20:34:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.312 20:34:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:37.619 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:80:01.1 (8086 0b00): Already 
using the vfio-pci driver 00:03:37.619 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:37.619 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:37.619 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:37.888 20:35:02 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:37.888 20:35:02 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:37.888 20:35:02 -- setup/hugepages.sh@89 -- # local node 00:03:37.888 20:35:02 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.888 20:35:02 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.888 20:35:02 -- setup/hugepages.sh@92 -- # local surp 00:03:37.888 20:35:02 -- setup/hugepages.sh@93 -- # local resv 00:03:37.888 20:35:02 -- setup/hugepages.sh@94 -- # local anon 00:03:37.888 20:35:02 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.888 20:35:02 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.888 20:35:02 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.888 20:35:02 -- setup/common.sh@18 -- # local node= 00:03:37.888 20:35:02 -- setup/common.sh@19 -- # local var val 00:03:37.888 20:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.888 20:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.888 20:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.888 20:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.888 20:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.888 20:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.888 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.888 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.888 20:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 103403676 kB' 'MemAvailable: 107704524 kB' 'Buffers: 8520 kB' 'Cached: 14945508 kB' 'SwapCached: 0 kB' 'Active: 11918564 kB' 'Inactive: 3674916 kB' 'Active(anon): 10791052 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642332 kB' 'Mapped: 201920 kB' 'Shmem: 10151600 kB' 'KReclaimable: 539364 kB' 'Slab: 1315664 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776300 kB' 'KernelStack: 27504 kB' 'PageTables: 9184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12276740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235932 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:37.888 20:35:02 -- setup/common.sh@32 -- # [[ 
MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.888 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.888 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.888 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.888 20:35:02 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.888 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.888 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.888 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.888 20:35:02 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.888 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 
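The custom_alloc setup traced above converts each requested size in kB into a page count using the 2048 kB default hugepage size (1048576 kB -> 512 pages, 2097152 kB -> 1024 pages) and joins the per-node counts into the HUGENODE string handed to setup.sh. A simplified sketch of that accounting; the real script builds HUGENODE as an array and flattens it with IFS=, later:

    default_hugepages=2048                              # kB, Hugepagesize from /proc/meminfo
    nodes_hp[0]=$((1048576 / default_hugepages))        # 512 pages requested on node 0
    nodes_hp[1]=$((2097152 / default_hugepages))        # 1024 pages requested on node 1
    HUGENODE=
    nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=${HUGENODE:+,}nodes_hp[$node]=${nodes_hp[node]}
        ((nr_hugepages += nodes_hp[node]))
    done
    printf '%s\n' "$HUGENODE"       # nodes_hp[0]=512,nodes_hp[1]=1024
    printf '%s\n' "$nr_hugepages"   # 1536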
00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- 
setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.889 20:35:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.889 20:35:02 -- setup/common.sh@33 -- # echo 0 00:03:37.889 20:35:02 -- setup/common.sh@33 -- # return 0 00:03:37.889 20:35:02 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.889 20:35:02 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.889 20:35:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.889 20:35:02 -- setup/common.sh@18 -- # local node= 00:03:37.889 20:35:02 -- setup/common.sh@19 -- # local var val 00:03:37.889 20:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.889 20:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.889 20:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.889 20:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.889 20:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.889 20:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.889 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 103403248 kB' 'MemAvailable: 107704096 kB' 'Buffers: 8520 kB' 'Cached: 14945512 kB' 'SwapCached: 0 kB' 'Active: 11918652 kB' 'Inactive: 3674916 kB' 'Active(anon): 10791140 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 642416 kB' 'Mapped: 202372 kB' 'Shmem: 10151604 kB' 'KReclaimable: 539364 kB' 'Slab: 1315632 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776268 kB' 'KernelStack: 27520 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12277844 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235932 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 
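verify_nr_hugepages then reads AnonHugePages, HugePages_Surp and HugePages_Rsvd back out of /proc/meminfo (all three come back as 0 in this run) and checks that HugePages_Total still matches the 1536 pages just requested. A rough sketch of that bookkeeping, where the awk one-liner is only a stand-in for the get_meminfo scans traced around this point:

    get_meminfo() { awk -v key="$1:" '$1 == key {print $2}' /proc/meminfo; }

    nr_hugepages=1536                       # requested via HUGENODE above
    anon=$(get_meminfo AnonHugePages)       # 0 kB in this run
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    total=$(get_meminfo HugePages_Total)    # 1536

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting OK: total=$total anon=${anon}kB"
    else
        echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
    fi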
00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.890 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.890 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.891 20:35:02 -- setup/common.sh@33 -- # echo 0 00:03:37.891 20:35:02 -- setup/common.sh@33 -- # return 0 00:03:37.891 20:35:02 -- setup/hugepages.sh@99 -- # surp=0 00:03:37.891 20:35:02 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.891 20:35:02 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.891 20:35:02 -- setup/common.sh@18 -- # local node= 00:03:37.891 
20:35:02 -- setup/common.sh@19 -- # local var val 00:03:37.891 20:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.891 20:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.891 20:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.891 20:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.891 20:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.891 20:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 103400756 kB' 'MemAvailable: 107701604 kB' 'Buffers: 8520 kB' 'Cached: 14945520 kB' 'SwapCached: 0 kB' 'Active: 11921824 kB' 'Inactive: 3674916 kB' 'Active(anon): 10794312 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 646040 kB' 'Mapped: 202264 kB' 'Shmem: 10151612 kB' 'KReclaimable: 539364 kB' 'Slab: 1315612 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776248 kB' 'KernelStack: 27520 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12281424 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235884 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.891 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.891 20:35:02 -- setup/common.sh@32 -- # continue 00:03:37.892 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 
20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.155 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.155 20:35:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 
20:35:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 
20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.156 20:35:02 -- setup/common.sh@33 -- # echo 0 00:03:38.156 20:35:02 -- setup/common.sh@33 -- # return 0 00:03:38.156 20:35:02 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.156 20:35:02 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:38.156 nr_hugepages=1536 00:03:38.156 20:35:02 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.156 resv_hugepages=0 00:03:38.156 20:35:02 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.156 surplus_hugepages=0 00:03:38.156 20:35:02 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.156 anon_hugepages=0 00:03:38.156 20:35:02 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:38.156 20:35:02 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:38.156 20:35:02 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.156 20:35:02 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.156 20:35:02 -- setup/common.sh@18 -- # local node= 00:03:38.156 20:35:02 -- setup/common.sh@19 -- # local var val 00:03:38.156 20:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.156 20:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.156 20:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.156 20:35:02 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.156 20:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.156 20:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 103397012 kB' 'MemAvailable: 107697860 kB' 'Buffers: 8520 kB' 'Cached: 14945548 kB' 'SwapCached: 0 kB' 'Active: 11917520 kB' 'Inactive: 3674916 kB' 'Active(anon): 10790008 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641704 kB' 'Mapped: 202028 kB' 'Shmem: 10151640 kB' 'KReclaimable: 539364 kB' 'Slab: 1315612 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776248 kB' 'KernelStack: 27504 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12276780 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235900 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.156 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.156 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # 
continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 
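[editor's note] The long runs of "[[ Key == \H\u\g\e\P\a\g\e\s... ]] / continue" above are the xtrace of the meminfo lookup in setup/common.sh: with IFS=': ' it reads each "Key: value" line, skips every key that does not match the requested field, and echoes the matching value (optionally from a per-node meminfo file). A minimal standalone sketch of that pattern, assuming a hypothetical helper name meminfo_value rather than the repo's get_meminfo:

    #!/usr/bin/env bash
    # Minimal sketch of the /proc/meminfo lookup pattern seen in the trace above.
    # meminfo_value <Key> [node]  -> prints the numeric value for Key.
    meminfo_value() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node lookups read that node's meminfo file when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node files prefix each line with "Node N "; strip it so the key
        # lands in $var exactly as in the global /proc/meminfo case.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    meminfo_value HugePages_Rsvd        # -> 0 in this run
    meminfo_value HugePages_Total 0     # -> 512 for node 0 on this rig

The helper name and exact structure are illustrative only; the traced script reads the whole file into an array and scans it, but the IFS=': ' read loop and the key comparison are the same idea.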
00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.157 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.157 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.157 20:35:02 -- setup/common.sh@33 -- # echo 1536 00:03:38.157 20:35:02 -- setup/common.sh@33 -- # return 0 00:03:38.157 20:35:02 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:38.157 20:35:02 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.157 20:35:02 -- setup/hugepages.sh@27 -- # local node 00:03:38.157 20:35:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.157 20:35:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.157 20:35:02 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.157 20:35:02 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:38.158 20:35:02 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.158 20:35:02 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.158 20:35:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.158 20:35:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.158 20:35:02 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.158 20:35:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.158 20:35:02 -- setup/common.sh@18 -- # local node=0 00:03:38.158 20:35:02 -- setup/common.sh@19 -- # local var val 00:03:38.158 20:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.158 20:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.158 20:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.158 20:35:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.158 20:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.158 20:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 55837572 kB' 'MemUsed: 9821452 kB' 'SwapCached: 0 kB' 'Active: 6008920 kB' 'Inactive: 190144 kB' 'Active(anon): 5419948 kB' 'Inactive(anon): 0 kB' 'Active(file): 588972 kB' 'Inactive(file): 190144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5731848 kB' 'Mapped: 160648 kB' 'AnonPages: 470524 kB' 'Shmem: 4952732 kB' 'KernelStack: 16072 kB' 'PageTables: 5928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320852 kB' 'Slab: 733220 kB' 'SReclaimable: 320852 kB' 'SUnreclaim: 412368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
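[editor's note] The printf just above dumps node 0's meminfo snapshot (HugePages_Total: 512, HugePages_Free: 512, HugePages_Surp: 0), and the hugepages.sh lines around it assert that the configured pool matches what the kernel reports: 1536 pages in total, split 512 on node 0 and 1024 on node 1, with no reserved or surplus pages. A hedged sketch of roughly that accounting check, as I read the trace (names and the 2048 kB page-size path are assumptions for illustration):

    #!/usr/bin/env bash
    # Sketch of the custom_alloc accounting the trace is verifying.
    expected_total=1536   # what the test configured on this rig (512 + 1024)
    total=$(grep -m1 '^HugePages_Total' /proc/meminfo | awk '{print $2}')
    resv=$(grep  -m1 '^HugePages_Rsvd'  /proc/meminfo | awk '{print $2}')
    surp=$(grep  -m1 '^HugePages_Surp'  /proc/meminfo | awk '{print $2}')

    # Sum the per-node pools for the default 2048 kB page size.
    node_sum=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        [[ -e $f ]] || continue
        node_sum=$(( node_sum + $(cat "$f") ))
    done

    # Same relation the script asserts: configured total accounts for the
    # global pool plus surplus and reserved, and the nodes add up to it.
    if (( total == expected_total + surp + resv )) && (( node_sum == total )); then
        echo "hugepage layout OK: total=$total node_sum=$node_sum surp=$surp resv=$resv"
    else
        echo "unexpected hugepage layout: total=$total node_sum=$node_sum" >&2
    fi

On this run the check passes, which is why the log later prints "node0=512 expecting 512" and "node1=1024 expecting 1024" before END TEST custom_alloc.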
00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.158 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.158 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@33 -- # echo 0 00:03:38.159 20:35:02 -- setup/common.sh@33 -- # return 0 00:03:38.159 20:35:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.159 20:35:02 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.159 20:35:02 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.159 20:35:02 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 1 00:03:38.159 20:35:02 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.159 20:35:02 -- setup/common.sh@18 -- # local node=1 00:03:38.159 20:35:02 -- setup/common.sh@19 -- # local var val 00:03:38.159 20:35:02 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.159 20:35:02 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.159 20:35:02 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.159 20:35:02 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.159 20:35:02 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.159 20:35:02 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679816 kB' 'MemFree: 47559472 kB' 'MemUsed: 13120344 kB' 'SwapCached: 0 kB' 'Active: 5908824 kB' 'Inactive: 3484772 kB' 'Active(anon): 5370284 kB' 'Inactive(anon): 0 kB' 'Active(file): 538540 kB' 'Inactive(file): 3484772 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9222224 kB' 'Mapped: 41112 kB' 'AnonPages: 171496 kB' 'Shmem: 5198912 kB' 'KernelStack: 11448 kB' 'PageTables: 3304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 218512 kB' 'Slab: 582392 kB' 'SReclaimable: 218512 kB' 'SUnreclaim: 363880 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.159 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.159 20:35:02 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.160 
20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # continue 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.160 20:35:02 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.160 20:35:02 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.160 20:35:02 -- setup/common.sh@33 -- # echo 0 00:03:38.160 20:35:02 -- setup/common.sh@33 -- # return 0 00:03:38.160 20:35:02 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.160 20:35:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.160 20:35:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.160 20:35:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.160 20:35:02 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:38.160 node0=512 expecting 512 00:03:38.160 20:35:02 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.160 20:35:02 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.160 20:35:02 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.160 20:35:02 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:38.160 node1=1024 expecting 1024 00:03:38.160 20:35:02 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:38.160 00:03:38.160 real 0m3.838s 00:03:38.160 user 0m1.493s 00:03:38.160 sys 0m2.364s 00:03:38.160 20:35:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:38.160 20:35:02 -- common/autotest_common.sh@10 -- # set +x 00:03:38.160 ************************************ 00:03:38.160 END TEST custom_alloc 00:03:38.160 ************************************ 00:03:38.160 20:35:02 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:38.160 20:35:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:38.160 20:35:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:38.160 20:35:02 -- common/autotest_common.sh@10 -- # set +x 00:03:38.421 ************************************ 00:03:38.421 START TEST no_shrink_alloc 00:03:38.421 ************************************ 00:03:38.421 20:35:02 -- common/autotest_common.sh@1111 -- # no_shrink_alloc 00:03:38.421 20:35:02 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:38.421 20:35:02 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.421 20:35:02 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:38.421 20:35:02 -- setup/hugepages.sh@51 -- # shift 00:03:38.421 20:35:02 -- setup/hugepages.sh@52 -- # 
node_ids=('0') 00:03:38.421 20:35:02 -- setup/hugepages.sh@52 -- # local node_ids 00:03:38.421 20:35:02 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.421 20:35:02 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:38.421 20:35:02 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:38.421 20:35:02 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:38.421 20:35:02 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.421 20:35:02 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.421 20:35:02 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.421 20:35:02 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.421 20:35:02 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.421 20:35:02 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:38.421 20:35:02 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:38.421 20:35:02 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:38.421 20:35:02 -- setup/hugepages.sh@73 -- # return 0 00:03:38.421 20:35:02 -- setup/hugepages.sh@198 -- # setup output 00:03:38.421 20:35:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.421 20:35:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:41.723 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.723 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.723 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.723 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.723 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:41.724 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:41.724 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:41.986 20:35:06 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:41.986 20:35:06 -- setup/hugepages.sh@89 -- # local node 00:03:41.986 20:35:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:41.986 20:35:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:41.986 20:35:06 -- setup/hugepages.sh@92 -- # local surp 00:03:41.986 20:35:06 -- setup/hugepages.sh@93 -- # local resv 00:03:41.986 20:35:06 -- setup/hugepages.sh@94 -- # local anon 00:03:41.986 20:35:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:41.986 20:35:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:41.986 20:35:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:41.986 20:35:06 -- setup/common.sh@18 -- # local node= 00:03:41.986 20:35:06 -- setup/common.sh@19 -- # local var val 00:03:41.986 20:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.986 20:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.986 20:35:06 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:41.986 20:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.986 20:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.986 20:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104446072 kB' 'MemAvailable: 108746920 kB' 'Buffers: 8520 kB' 'Cached: 14945664 kB' 'SwapCached: 0 kB' 'Active: 11919632 kB' 'Inactive: 3674916 kB' 'Active(anon): 10792120 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643156 kB' 'Mapped: 201940 kB' 'Shmem: 10151756 kB' 'KReclaimable: 539364 kB' 'Slab: 1315512 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776148 kB' 'KernelStack: 27520 kB' 'PageTables: 9232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12281300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235868 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 
-- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.986 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.986 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 
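[editor's note] The no_shrink_alloc test that starts above is driven by "get_test_nr_hugepages 2097152 0": a 2 097 152 kB (2 GiB) request is converted into default-sized hugepages and pinned to node 0 only, which is where the nr_hugepages=1024 in the trace comes from. A short sketch of that size-to-count step, under the assumption that the division by Hugepagesize is what the trace reflects:

    #!/usr/bin/env bash
    # Sketch of the size-to-page-count step behind "get_test_nr_hugepages 2097152 0".
    size_kb=2097152                                                          # 2 GiB request from the test
    default_kb=$(grep -m1 '^Hugepagesize' /proc/meminfo | awk '{print $2}')  # 2048 on this rig
    nr_hugepages=$(( size_kb / default_kb ))                                 # -> 1024
    echo "nr_hugepages=$nr_hugepages (node 0 only)"

The verify_nr_hugepages scan that follows in the log then walks /proc/meminfo again (the AnonHugePages comparisons above) to confirm the pool ended up as requested.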
20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:41.987 20:35:06 -- setup/common.sh@33 -- # echo 0 00:03:41.987 20:35:06 -- setup/common.sh@33 -- # return 0 00:03:41.987 20:35:06 -- setup/hugepages.sh@97 -- # anon=0 00:03:41.987 20:35:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:41.987 20:35:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:41.987 20:35:06 -- setup/common.sh@18 -- # local node= 00:03:41.987 20:35:06 -- setup/common.sh@19 -- # local var val 00:03:41.987 20:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.987 20:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.987 20:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.987 20:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.987 20:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.987 20:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104447480 kB' 'MemAvailable: 108748328 kB' 'Buffers: 8520 kB' 'Cached: 14945668 kB' 'SwapCached: 0 kB' 'Active: 11919512 kB' 'Inactive: 3674916 kB' 
'Active(anon): 10792000 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643592 kB' 'Mapped: 201808 kB' 'Shmem: 10151760 kB' 'KReclaimable: 539364 kB' 'Slab: 1315488 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776124 kB' 'KernelStack: 27472 kB' 'PageTables: 9140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12280048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 
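(Editor's note on the trace above and below: the long run of "[[ key == ... ]] / continue" records is the get_meminfo helper from setup/common.sh scanning a snapshot of /proc/meminfo one field at a time. As the trace shows, it mapfiles the file, strips any "Node N " prefix, splits each line on ': ' into a key and a value, and echoes the value once the requested key matches. A minimal stand-alone sketch of that lookup follows; the name lookup_meminfo and its argument handling are illustrative paraphrases of the trace, not the exact helper.)

    # Hedged sketch of the lookup loop visible in the trace; not the real helper.
    lookup_meminfo() {
        local get=$1 node=${2:-}            # field to fetch, optional NUMA node
        local mem_f=/proc/meminfo
        # With a node argument the per-node sysfs copy is used instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    # e.g. lookup_meminfo HugePages_Total    -> 1024 on this runner
    #      lookup_meminfo HugePages_Surp 0   -> node 0's surplus count

(End of note; the trace resumes below.)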
00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.987 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.987 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- 
setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:41.988 20:35:06 -- setup/common.sh@33 -- # echo 0 00:03:41.988 20:35:06 -- setup/common.sh@33 -- # return 0 00:03:41.988 20:35:06 -- setup/hugepages.sh@99 -- # surp=0 00:03:41.988 20:35:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:41.988 20:35:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:41.988 20:35:06 -- setup/common.sh@18 -- # local node= 00:03:41.988 20:35:06 -- setup/common.sh@19 -- # local var val 00:03:41.988 20:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:41.988 20:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:41.988 20:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:41.988 20:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:41.988 20:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:41.988 20:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104449020 kB' 'MemAvailable: 108749868 kB' 'Buffers: 8520 kB' 'Cached: 14945680 kB' 'SwapCached: 0 kB' 'Active: 11919496 kB' 'Inactive: 3674916 kB' 'Active(anon): 10791984 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643552 kB' 'Mapped: 201808 kB' 'Shmem: 10151772 kB' 'KReclaimable: 539364 kB' 'Slab: 1315556 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776192 kB' 'KernelStack: 27552 kB' 'PageTables: 9336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12279696 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235852 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.988 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.988 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:41.989 20:35:06 -- setup/common.sh@32 -- # continue 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:41.989 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- 
setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.255 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.255 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.255 20:35:06 -- setup/common.sh@33 -- # echo 0 00:03:42.255 20:35:06 -- setup/common.sh@33 -- # return 0 00:03:42.255 20:35:06 -- setup/hugepages.sh@100 -- # resv=0 00:03:42.255 20:35:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.255 nr_hugepages=1024 00:03:42.255 20:35:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.255 resv_hugepages=0 00:03:42.255 20:35:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.255 surplus_hugepages=0 00:03:42.255 20:35:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:42.255 anon_hugepages=0 00:03:42.255 20:35:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.255 20:35:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.255 20:35:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.255 20:35:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.255 20:35:06 -- setup/common.sh@18 -- # local node= 00:03:42.255 20:35:06 -- setup/common.sh@19 -- # local var val 00:03:42.255 20:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.255 20:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.255 20:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.255 20:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.255 20:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.255 20:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.256 20:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104450596 kB' 'MemAvailable: 108751444 kB' 'Buffers: 8520 kB' 'Cached: 14945692 kB' 'SwapCached: 0 kB' 'Active: 11919368 kB' 'Inactive: 3674916 kB' 'Active(anon): 10791856 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 643472 kB' 'Mapped: 201808 kB' 'Shmem: 10151784 kB' 'KReclaimable: 539364 kB' 'Slab: 1315548 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776184 kB' 'KernelStack: 27568 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12279708 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235900 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.256 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.256 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.257 20:35:06 -- setup/common.sh@33 -- # echo 1024 00:03:42.257 20:35:06 -- setup/common.sh@33 -- # return 0 00:03:42.257 20:35:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.257 20:35:06 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.257 20:35:06 -- setup/hugepages.sh@27 -- # local node 00:03:42.257 20:35:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.257 20:35:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:42.257 20:35:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.257 20:35:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.257 20:35:06 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.257 20:35:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.257 20:35:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.257 20:35:06 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:03:42.257 20:35:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.257 20:35:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.257 20:35:06 -- setup/common.sh@18 -- # local node=0 00:03:42.257 20:35:06 -- setup/common.sh@19 -- # local var val 00:03:42.257 20:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.257 20:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.257 20:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.257 20:35:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.257 20:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.257 20:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 54796880 kB' 'MemUsed: 10862144 kB' 'SwapCached: 0 kB' 'Active: 6010676 kB' 'Inactive: 190144 kB' 'Active(anon): 5421704 kB' 'Inactive(anon): 0 kB' 'Active(file): 588972 kB' 'Inactive(file): 190144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5731936 kB' 'Mapped: 160684 kB' 'AnonPages: 472060 kB' 'Shmem: 4952820 kB' 'KernelStack: 16296 kB' 'PageTables: 6276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320852 kB' 'Slab: 732932 kB' 'SReclaimable: 320852 kB' 'SUnreclaim: 412080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
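(Editor's note: at this point setup/hugepages.sh has read HugePages_Total (1024), HugePages_Surp (0) and HugePages_Rsvd (0), checked that 1024 reconciles with nr_hugepages + surp + resv, and begun walking each NUMA node's meminfo — no_nodes=2 on this runner — to see where the pages landed. A hedged sketch of that reconciliation, using plain awk instead of the traced get_meminfo helper, is shown below.)

    # Illustrative reconciliation only; the test itself goes through get_meminfo.
    total=$(awk '/^HugePages_Total:/ {print $NF}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $NF}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/  {print $NF}' /proc/meminfo)
    (( total == 1024 + surp + resv )) || echo "unexpected hugepage count: $total"
    for node in /sys/devices/system/node/node[0-9]*; do
        # Per-node lines look like "Node 0 HugePages_Total: 1024".
        echo "${node##*/}: $(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo") hugepages"
    done

(End of note; the per-node scan continues below.)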
00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.257 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.257 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 
00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # continue 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.258 20:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.258 20:35:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.258 20:35:06 -- setup/common.sh@33 -- # echo 0 00:03:42.258 20:35:06 -- setup/common.sh@33 -- # return 0 00:03:42.258 20:35:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.258 20:35:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.258 20:35:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.258 20:35:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.258 20:35:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:42.258 node0=1024 expecting 1024 00:03:42.258 20:35:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:42.258 20:35:06 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:42.258 20:35:06 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:42.258 20:35:06 -- setup/hugepages.sh@202 -- # setup output 00:03:42.258 20:35:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.258 20:35:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:45.567 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:45.567 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:45.567 0000:00:01.0 (8086 0b00): Already using the vfio-pci 
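Note: the INFO line above shows scripts/setup.sh being asked for NRHUGE=512 while, with CLEAR_HUGE=no, the existing 1024-page allocation on node0 is left in place. A hedged sketch of that allocate-only-if-needed pattern follows; this is not the real scripts/setup.sh logic, only an illustration using the standard per-node hugepage sysfs knob (the variable names are made up):

    # Illustrative only: honor an existing, larger hugepage allocation.
    NRHUGE=${NRHUGE:-512}
    node=0
    nr_path=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    current=$(cat "$nr_path")
    if (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node$node"
    else
        # Needs root; the kernel may allocate fewer pages if memory is fragmented.
        echo "$NRHUGE" > "$nr_path"
    fi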
20:35:10 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:45.567 20:35:10 -- setup/hugepages.sh@89 -- # local node
00:03:45.567 20:35:10 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:45.567 20:35:10 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:45.567 20:35:10 -- setup/hugepages.sh@92 -- # local surp
00:03:45.567 20:35:10 -- setup/hugepages.sh@93 -- # local resv
00:03:45.567 20:35:10 -- setup/hugepages.sh@94 -- # local anon
00:03:45.567 20:35:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:45.567 20:35:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:45.567 20:35:10 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:45.567 20:35:10 -- setup/common.sh@18 -- # local node=
00:03:45.567 20:35:10 -- setup/common.sh@19 -- # local var val
00:03:45.567 20:35:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.567 20:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.567 20:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.567 20:35:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.567 20:35:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.567 20:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.567 20:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104470496 kB' 'MemAvailable: 108771344 kB' 'Buffers: 8520 kB' 'Cached: 14945768 kB' 'SwapCached: 0 kB' 'Active: 11920948 kB' 'Inactive: 3674916 kB' 'Active(anon): 10793436 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 645168 kB' 'Mapped: 201892 kB' 'Shmem: 10151860 kB' 'KReclaimable: 539364 kB' 'Slab: 1315996 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776632 kB' 'KernelStack: 27440 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12278632 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB'
[trace condensed: setup/common.sh@31-32 scans the fields above until AnonHugePages matches]
00:03:45.568 20:35:10 -- setup/common.sh@33 -- # echo 0
00:03:45.568 20:35:10 -- setup/common.sh@33 -- # return 0
00:03:45.568 20:35:10 -- setup/hugepages.sh@97 -- # anon=0
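Note: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test near the top of verify_nr_hugepages compares the kernel's transparent-hugepage setting against the literal [never]; only when THP is not disabled does the trace go on to read AnonHugePages (0 kB in this run). A stand-alone rendering of that gate, assuming the standard sysfs switch; the variable names are illustrative:

    # Illustrative: count anonymous (THP) hugepage usage only when THP is enabled.
    thp_setting=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon_kb=0
    if [[ $thp_setting != *"[never]"* ]]; then
        # AnonHugePages in /proc/meminfo is reported in kB.
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    fi
    echo "anon hugepages in use: ${anon_kb} kB"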
00:03:45.568 20:35:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:45.568 20:35:10 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:45.568 20:35:10 -- setup/common.sh@18 -- # local node=
00:03:45.568 20:35:10 -- setup/common.sh@19 -- # local var val
00:03:45.568 20:35:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.568 20:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.568 20:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.568 20:35:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.568 20:35:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.568 20:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.568 20:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104469280 kB' 'MemAvailable: 108770128 kB' 'Buffers: 8520 kB' 'Cached: 14945772 kB' 'SwapCached: 0 kB' 'Active: 11923244 kB' 'Inactive: 3674916 kB' 'Active(anon): 10795732 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 647764 kB' 'Mapped: 201852 kB' 'Shmem: 10151864 kB' 'KReclaimable: 539364 kB' 'Slab: 1316012 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776648 kB' 'KernelStack: 27440 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12280036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235772 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB'
[trace condensed: setup/common.sh@31-32 scans the fields above until HugePages_Surp matches]
00:03:45.570 20:35:10 -- setup/common.sh@33 -- # echo 0
00:03:45.570 20:35:10 -- setup/common.sh@33 -- # return 0
00:03:45.570 20:35:10 -- setup/hugepages.sh@99 -- # surp=0
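Note: the two counters being collected here have distinct meanings in the kernel's accounting: HugePages_Surp is the number of surplus pages obtained above nr_hugepages through overcommit, and HugePages_Rsvd (read next) is pages reserved for mappings that have not yet been faulted in. A minimal way to read both directly, for illustration only:

    # Illustrative: pull the surplus and reserved hugepage counters.
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)   # above nr_hugepages, via overcommit
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)   # reserved but not yet faulted in
    echo "surplus=$surp reserved=$resv"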
00:03:45.570 20:35:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:45.570 20:35:10 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:45.570 20:35:10 -- setup/common.sh@18 -- # local node=
00:03:45.570 20:35:10 -- setup/common.sh@19 -- # local var val
00:03:45.570 20:35:10 -- setup/common.sh@20 -- # local mem_f mem
00:03:45.570 20:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:45.570 20:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:45.570 20:35:10 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:45.570 20:35:10 -- setup/common.sh@28 -- # mapfile -t mem
00:03:45.570 20:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:45.570 20:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338840 kB' 'MemFree: 104474584 kB' 'MemAvailable: 108775432 kB' 'Buffers: 8520 kB' 'Cached: 14945784 kB' 'SwapCached: 0 kB' 'Active: 11924432 kB' 'Inactive: 3674916 kB' 'Active(anon): 10796920 kB' 'Inactive(anon): 0 kB' 'Active(file): 1127512 kB' 'Inactive(file): 3674916 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 649008 kB' 'Mapped: 201852 kB' 'Shmem: 10151876 kB' 'KReclaimable: 539364 kB' 'Slab: 1316012 kB' 'SReclaimable: 539364 kB' 'SUnreclaim: 776648 kB' 'KernelStack: 27488 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12282604 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235836 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB'
[trace condensed: setup/common.sh@31-32 scans the fields above until HugePages_Rsvd matches]
00:03:45.571 20:35:10 -- setup/common.sh@33 -- # echo 0
00:03:45.571 20:35:10 -- setup/common.sh@33 -- # return 0
00:03:45.571 20:35:10 -- setup/hugepages.sh@100 -- # resv=0
00:03:45.571 20:35:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:45.571 20:35:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:45.571 20:35:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:45.571 20:35:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:45.571 20:35:10 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:45.571 20:35:10 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
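Note: the two arithmetic tests just above are the core of verify_nr_hugepages: the kernel's reported total must equal the expected page count plus any surplus and reserved pages, and with none of those present it must simply equal nr_hugepages. With this run's numbers that is 1024 == 1024 + 0 + 0. A stand-alone restatement of the same consistency check, with illustrative variable names:

    # Illustrative restatement of the consistency check seen in the trace.
    nr_hugepages=1024                                    # what the test expects
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch" >&2
    fi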
'SUnreclaim: 776644 kB' 'KernelStack: 27504 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12285936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235788 kB' 'VmallocChunk: 0 kB' 'Percpu: 160704 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4713860 kB' 'DirectMap2M: 30617600 kB' 'DirectMap1G: 100663296 kB' 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.571 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.571 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- 
setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.572 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.572 20:35:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.573 20:35:10 -- setup/common.sh@33 -- # echo 1024 00:03:45.573 20:35:10 -- setup/common.sh@33 -- # return 0 00:03:45.573 20:35:10 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.573 20:35:10 -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.573 20:35:10 -- setup/hugepages.sh@27 -- # local node 00:03:45.573 20:35:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.573 20:35:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.573 20:35:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.573 20:35:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.573 20:35:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.573 20:35:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.573 20:35:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.573 20:35:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.573 20:35:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.573 20:35:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.573 20:35:10 -- setup/common.sh@18 -- # local node=0 00:03:45.573 20:35:10 -- setup/common.sh@19 -- # local var val 00:03:45.573 20:35:10 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.573 20:35:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.573 20:35:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.573 20:35:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.573 20:35:10 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.573 20:35:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659024 kB' 'MemFree: 54795004 kB' 'MemUsed: 10864020 kB' 'SwapCached: 0 kB' 'Active: 6011972 kB' 'Inactive: 190144 kB' 'Active(anon): 5423000 kB' 'Inactive(anon): 0 kB' 'Active(file): 588972 kB' 'Inactive(file): 190144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5731956 kB' 'Mapped: 160696 kB' 'AnonPages: 473776 kB' 'Shmem: 4952840 kB' 'KernelStack: 16008 kB' 'PageTables: 5752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 320852 kB' 'Slab: 733424 kB' 'SReclaimable: 320852 kB' 'SUnreclaim: 412572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.573 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.573 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # 
continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # continue 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.574 20:35:10 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.574 20:35:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.574 20:35:10 -- setup/common.sh@33 -- # echo 0 00:03:45.574 20:35:10 -- setup/common.sh@33 -- # return 0 00:03:45.574 20:35:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.574 20:35:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.574 20:35:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.574 20:35:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.574 20:35:10 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.574 node0=1024 expecting 1024 00:03:45.574 20:35:10 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.574 00:03:45.574 real 0m7.347s 00:03:45.574 user 0m2.727s 00:03:45.574 sys 0m4.647s 00:03:45.574 20:35:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:45.574 20:35:10 -- common/autotest_common.sh@10 -- # set +x 00:03:45.574 ************************************ 00:03:45.574 END TEST no_shrink_alloc 00:03:45.574 ************************************ 00:03:45.835 20:35:10 -- 
setup/hugepages.sh@217 -- # clear_hp 00:03:45.835 20:35:10 -- setup/hugepages.sh@37 -- # local node hp 00:03:45.835 20:35:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.835 20:35:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.835 20:35:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.835 20:35:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.835 20:35:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.835 20:35:10 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:45.835 20:35:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.835 20:35:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.835 20:35:10 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:45.835 20:35:10 -- setup/hugepages.sh@41 -- # echo 0 00:03:45.835 20:35:10 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:45.835 20:35:10 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:45.835 00:03:45.835 real 0m26.972s 00:03:45.835 user 0m9.947s 00:03:45.835 sys 0m16.966s 00:03:45.835 20:35:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:45.835 20:35:10 -- common/autotest_common.sh@10 -- # set +x 00:03:45.835 ************************************ 00:03:45.835 END TEST hugepages 00:03:45.835 ************************************ 00:03:45.836 20:35:10 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:45.836 20:35:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:45.836 20:35:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:45.836 20:35:10 -- common/autotest_common.sh@10 -- # set +x 00:03:45.836 ************************************ 00:03:45.836 START TEST driver 00:03:45.836 ************************************ 00:03:45.836 20:35:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:46.098 * Looking for test storage... 
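The per-key matching loops that dominate the hugepages trace above are setup/common.sh's get_meminfo helper walking /proc/meminfo (or a node's meminfo under /sys/devices/system/node) until it reaches the requested field. A minimal sketch of that lookup in plain bash, simplified to the system-wide file only and not the exact SPDK helper:

  get_meminfo() {
      # Print the value of one /proc/meminfo field (kB for sized fields), e.g. HugePages_Total.
      local key=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; return 0; }
      done </proc/meminfo
      return 1
  }
  get_meminfo HugePages_Total   # prints 1024 on the node traced above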
00:03:46.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:46.098 20:35:10 -- setup/driver.sh@68 -- # setup reset 00:03:46.098 20:35:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:46.098 20:35:10 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.385 20:35:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:51.385 20:35:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:51.385 20:35:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:51.385 20:35:15 -- common/autotest_common.sh@10 -- # set +x 00:03:51.385 ************************************ 00:03:51.385 START TEST guess_driver 00:03:51.385 ************************************ 00:03:51.385 20:35:15 -- common/autotest_common.sh@1111 -- # guess_driver 00:03:51.385 20:35:15 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:51.385 20:35:15 -- setup/driver.sh@47 -- # local fail=0 00:03:51.385 20:35:15 -- setup/driver.sh@49 -- # pick_driver 00:03:51.385 20:35:15 -- setup/driver.sh@36 -- # vfio 00:03:51.385 20:35:15 -- setup/driver.sh@21 -- # local iommu_grups 00:03:51.385 20:35:15 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:51.385 20:35:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:51.385 20:35:15 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:51.385 20:35:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:51.385 20:35:15 -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:03:51.385 20:35:15 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:51.385 20:35:15 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:51.385 20:35:15 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:51.385 20:35:15 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:51.385 20:35:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:51.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:51.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:51.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:51.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:51.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:51.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:51.385 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:51.385 20:35:15 -- setup/driver.sh@30 -- # return 0 00:03:51.385 20:35:15 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:51.385 20:35:15 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:51.385 20:35:15 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:51.385 20:35:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:51.385 Looking for driver=vfio-pci 00:03:51.385 20:35:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:51.385 20:35:15 -- setup/driver.sh@45 -- # setup output config 00:03:51.385 20:35:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.385 20:35:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:18 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:54.723 20:35:18 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:54.723 20:35:18 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:54.723 20:35:19 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:54.723 20:35:19 -- setup/driver.sh@65 -- # setup reset 00:03:54.723 20:35:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.723 20:35:19 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.010 00:04:00.010 real 0m9.030s 00:04:00.010 user 0m2.963s 00:04:00.010 sys 0m5.197s 00:04:00.010 20:35:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.010 20:35:24 -- common/autotest_common.sh@10 -- # set +x 00:04:00.010 ************************************ 00:04:00.010 END TEST guess_driver 00:04:00.010 ************************************ 00:04:00.010 00:04:00.010 real 0m13.991s 00:04:00.010 user 0m4.338s 00:04:00.010 sys 0m7.830s 00:04:00.010 20:35:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:00.010 20:35:24 -- common/autotest_common.sh@10 -- # set +x 00:04:00.010 ************************************ 00:04:00.010 END TEST driver 00:04:00.010 ************************************ 00:04:00.010 20:35:24 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:00.010 20:35:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:00.010 20:35:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:00.010 20:35:24 -- common/autotest_common.sh@10 -- # set +x 00:04:00.010 ************************************ 00:04:00.010 START TEST devices 00:04:00.010 ************************************ 00:04:00.010 20:35:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:00.271 * Looking for test storage... 
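The guess_driver run above settles on vfio-pci because the host exposes 314 IOMMU groups and modprobe can resolve the full vfio_pci module chain. Roughly, the decision reduces to the sketch below; the real helper also reads /sys/module/vfio/parameters/enable_unsafe_noiommu_mode, and the uio_pci_generic fallback here is an assumption, not something this run exercised:

  #!/usr/bin/env bash
  shopt -s nullglob
  pick_driver() {
      local groups=(/sys/kernel/iommu_groups/*)
      # vfio-pci is only usable when the IOMMU is populated and the module chain resolves.
      if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
          echo vfio-pci
      else
          echo uio_pci_generic   # assumed fallback for no-IOMMU hosts
      fi
  }
  echo "Looking for driver=$(pick_driver)"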
00:04:00.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:00.271 20:35:24 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:00.271 20:35:24 -- setup/devices.sh@192 -- # setup reset 00:04:00.271 20:35:24 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:00.271 20:35:24 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:04.476 20:35:28 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:04.476 20:35:28 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:04.476 20:35:28 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:04.476 20:35:28 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:04.476 20:35:28 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:04.476 20:35:28 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:04.476 20:35:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:04.476 20:35:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.476 20:35:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:04.476 20:35:28 -- setup/devices.sh@196 -- # blocks=() 00:04:04.476 20:35:28 -- setup/devices.sh@196 -- # declare -a blocks 00:04:04.476 20:35:28 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:04.476 20:35:28 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:04.476 20:35:28 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:04.476 20:35:28 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:04.476 20:35:28 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:04.476 20:35:28 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:04.476 20:35:28 -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:04.476 20:35:28 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:04.476 20:35:28 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:04.476 20:35:28 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:04.476 20:35:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:04.476 No valid GPT data, bailing 00:04:04.476 20:35:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.476 20:35:28 -- scripts/common.sh@391 -- # pt= 00:04:04.476 20:35:28 -- scripts/common.sh@392 -- # return 1 00:04:04.476 20:35:28 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:04.476 20:35:28 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:04.476 20:35:28 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:04.476 20:35:28 -- setup/common.sh@80 -- # echo 1920383410176 00:04:04.476 20:35:28 -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:04.476 20:35:28 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:04.476 20:35:28 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:04.476 20:35:28 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:04.476 20:35:28 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:04.476 20:35:28 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:04.476 20:35:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:04.476 20:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:04.476 20:35:28 -- common/autotest_common.sh@10 -- # set +x 00:04:04.476 ************************************ 00:04:04.476 START TEST nvme_mount 00:04:04.476 ************************************ 00:04:04.476 20:35:29 -- 
common/autotest_common.sh@1111 -- # nvme_mount 00:04:04.476 20:35:29 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:04.476 20:35:29 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:04.476 20:35:29 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.476 20:35:29 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:04.476 20:35:29 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:04.476 20:35:29 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:04.476 20:35:29 -- setup/common.sh@40 -- # local part_no=1 00:04:04.476 20:35:29 -- setup/common.sh@41 -- # local size=1073741824 00:04:04.476 20:35:29 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:04.476 20:35:29 -- setup/common.sh@44 -- # parts=() 00:04:04.476 20:35:29 -- setup/common.sh@44 -- # local parts 00:04:04.476 20:35:29 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:04.476 20:35:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.476 20:35:29 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:04.476 20:35:29 -- setup/common.sh@46 -- # (( part++ )) 00:04:04.476 20:35:29 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:04.476 20:35:29 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:04.476 20:35:29 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:04.476 20:35:29 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:05.859 Creating new GPT entries in memory. 00:04:05.859 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:05.859 other utilities. 00:04:05.859 20:35:30 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:05.859 20:35:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:05.859 20:35:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:05.860 20:35:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:05.860 20:35:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:06.800 Creating new GPT entries in memory. 00:04:06.801 The operation has completed successfully. 
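Condensed, the partition-based nvme_mount variant traced here boils down to the commands below, taken from the trace; the flock around sgdisk and the sync_dev_uevents.sh wait for the partition uevent are omitted, so treat this as a sketch rather than the harness itself:

  disk=/dev/nvme0n1
  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
  sgdisk "$disk" --zap-all                # wipe any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:2099199     # one 1 GiB partition starting at sector 2048
  mkdir -p "$mnt"
  mkfs.ext4 -qF "${disk}p1"               # quiet, force: the partition is brand new
  mount "${disk}p1" "$mnt"

Sectors 2048 through 2099199 are 2097152 512-byte sectors, i.e. exactly 1 GiB, which is what the later mkfs.ext4 -qF and mount in the trace operate on.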
00:04:06.801 20:35:31 -- setup/common.sh@57 -- # (( part++ )) 00:04:06.801 20:35:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.801 20:35:31 -- setup/common.sh@62 -- # wait 2549228 00:04:06.801 20:35:31 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.801 20:35:31 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:06.801 20:35:31 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.801 20:35:31 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:06.801 20:35:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:06.801 20:35:31 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.801 20:35:31 -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.801 20:35:31 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:06.801 20:35:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:06.801 20:35:31 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:06.801 20:35:31 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:06.801 20:35:31 -- setup/devices.sh@53 -- # local found=0 00:04:06.801 20:35:31 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:06.801 20:35:31 -- setup/devices.sh@56 -- # : 00:04:06.801 20:35:31 -- setup/devices.sh@59 -- # local pci status 00:04:06.801 20:35:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.801 20:35:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:06.801 20:35:31 -- setup/devices.sh@47 -- # setup output config 00:04:06.801 20:35:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.801 20:35:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:10.099 20:35:34 -- setup/devices.sh@63 -- # found=1 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.099 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.099 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.100 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.100 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.100 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.100 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.100 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.100 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.100 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.100 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.100 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.100 20:35:34 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:10.100 20:35:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.360 20:35:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:10.360 20:35:34 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:10.360 20:35:34 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.360 20:35:34 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.360 20:35:34 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.360 20:35:34 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:10.360 20:35:34 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.360 20:35:34 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.360 20:35:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:10.360 20:35:34 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:10.360 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:10.360 20:35:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:10.360 20:35:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:10.621 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:10.621 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:10.621 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:10.621 
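The cleanup between the two mount variants is just an unmount followed by wipefs passes, which is what produces the signature-erase messages above and below; condensed, with the device and mount point from this run:

  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
  mountpoint -q "$mnt" && umount "$mnt"
  wipefs --all /dev/nvme0n1p1   # clears the ext4 signature (the 53 ef bytes above)
  wipefs --all /dev/nvme0n1     # clears both GPT headers and the protective MBR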
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:10.621 20:35:35 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:10.621 20:35:35 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:10.621 20:35:35 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.621 20:35:35 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:10.621 20:35:35 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:10.621 20:35:35 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.621 20:35:35 -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.621 20:35:35 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:10.621 20:35:35 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:10.621 20:35:35 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:10.621 20:35:35 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:10.621 20:35:35 -- setup/devices.sh@53 -- # local found=0 00:04:10.621 20:35:35 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:10.621 20:35:35 -- setup/devices.sh@56 -- # : 00:04:10.621 20:35:35 -- setup/devices.sh@59 -- # local pci status 00:04:10.621 20:35:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:10.621 20:35:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:10.621 20:35:35 -- setup/devices.sh@47 -- # setup output config 00:04:10.621 20:35:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.621 20:35:35 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:13.925 20:35:37 -- setup/devices.sh@63 -- # found=1 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:37 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:13.925 20:35:37 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:38 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.925 20:35:38 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:13.925 20:35:38 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.925 20:35:38 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:13.925 20:35:38 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:13.925 20:35:38 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.925 20:35:38 -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:04:13.925 20:35:38 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:13.925 20:35:38 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:13.925 20:35:38 -- setup/devices.sh@50 -- # local mount_point= 00:04:13.925 20:35:38 -- setup/devices.sh@51 -- # local test_file= 00:04:13.925 20:35:38 -- setup/devices.sh@53 -- # local found=0 00:04:13.925 20:35:38 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:13.925 20:35:38 -- setup/devices.sh@59 -- # local pci status 00:04:13.925 20:35:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.925 20:35:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:13.925 20:35:38 -- setup/devices.sh@47 -- # setup output config 00:04:13.925 20:35:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.925 20:35:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:17.223 20:35:41 -- 
setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:17.223 20:35:41 -- setup/devices.sh@63 -- # found=1 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.223 20:35:41 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:17.223 20:35:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.483 20:35:42 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:17.483 20:35:42 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:17.483 20:35:42 -- setup/devices.sh@68 -- # return 0 00:04:17.483 20:35:42 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:17.483 20:35:42 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.483 20:35:42 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:04:17.483 20:35:42 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:17.484 20:35:42 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:17.484 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:17.484 00:04:17.484 real 0m12.980s 00:04:17.484 user 0m3.854s 00:04:17.484 sys 0m6.908s 00:04:17.484 20:35:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:17.484 20:35:42 -- common/autotest_common.sh@10 -- # set +x 00:04:17.484 ************************************ 00:04:17.484 END TEST nvme_mount 00:04:17.484 ************************************ 00:04:17.484 20:35:42 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:17.484 20:35:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.484 20:35:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.484 20:35:42 -- common/autotest_common.sh@10 -- # set +x 00:04:17.744 ************************************ 00:04:17.744 START TEST dm_mount 00:04:17.744 ************************************ 00:04:17.744 20:35:42 -- common/autotest_common.sh@1111 -- # dm_mount 00:04:17.744 20:35:42 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:17.744 20:35:42 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:17.744 20:35:42 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:17.744 20:35:42 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:17.744 20:35:42 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:17.744 20:35:42 -- setup/common.sh@40 -- # local part_no=2 00:04:17.744 20:35:42 -- setup/common.sh@41 -- # local size=1073741824 00:04:17.744 20:35:42 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:17.744 20:35:42 -- setup/common.sh@44 -- # parts=() 00:04:17.744 20:35:42 -- setup/common.sh@44 -- # local parts 00:04:17.744 20:35:42 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:17.744 20:35:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.744 20:35:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.744 20:35:42 -- setup/common.sh@46 -- # (( part++ )) 00:04:17.744 20:35:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.744 20:35:42 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.744 20:35:42 -- setup/common.sh@46 -- # (( part++ )) 00:04:17.744 20:35:42 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.744 20:35:42 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:17.744 20:35:42 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:17.744 20:35:42 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:18.692 Creating new GPT entries in memory. 00:04:18.692 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.692 other utilities. 00:04:18.693 20:35:43 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.693 20:35:43 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.693 20:35:43 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.693 20:35:43 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.693 20:35:43 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:20.074 Creating new GPT entries in memory. 00:04:20.074 The operation has completed successfully. 
00:04:20.074 20:35:44 -- setup/common.sh@57 -- # (( part++ )) 00:04:20.074 20:35:44 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:20.074 20:35:44 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:20.074 20:35:44 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:20.074 20:35:44 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:21.016 The operation has completed successfully. 00:04:21.016 20:35:45 -- setup/common.sh@57 -- # (( part++ )) 00:04:21.016 20:35:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:21.016 20:35:45 -- setup/common.sh@62 -- # wait 2554419 00:04:21.016 20:35:45 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:21.016 20:35:45 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.016 20:35:45 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.016 20:35:45 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:21.016 20:35:45 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:21.016 20:35:45 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.016 20:35:45 -- setup/devices.sh@161 -- # break 00:04:21.016 20:35:45 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.016 20:35:45 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:21.016 20:35:45 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:21.016 20:35:45 -- setup/devices.sh@166 -- # dm=dm-0 00:04:21.016 20:35:45 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:21.016 20:35:45 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:21.016 20:35:45 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.016 20:35:45 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:21.016 20:35:45 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.016 20:35:45 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:21.016 20:35:45 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:21.016 20:35:45 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.016 20:35:45 -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.016 20:35:45 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:21.016 20:35:45 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:21.016 20:35:45 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:21.016 20:35:45 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:21.016 20:35:45 -- setup/devices.sh@53 -- # local found=0 00:04:21.016 20:35:45 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:21.016 20:35:45 -- setup/devices.sh@56 -- # : 00:04:21.016 20:35:45 -- 
setup/devices.sh@59 -- # local pci status 00:04:21.016 20:35:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.016 20:35:45 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:21.016 20:35:45 -- setup/devices.sh@47 -- # setup output config 00:04:21.016 20:35:45 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.016 20:35:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:24.318 20:35:48 -- setup/devices.sh@63 -- # found=1 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:24.318 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.318 20:35:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.318 20:35:48 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:24.318 20:35:48 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.318 20:35:48 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:24.318 20:35:48 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:24.318 20:35:48 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:24.318 20:35:48 -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:24.578 20:35:48 -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:24.578 20:35:48 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:24.578 20:35:48 -- setup/devices.sh@50 -- # local mount_point= 00:04:24.578 20:35:48 -- setup/devices.sh@51 -- # local test_file= 00:04:24.578 20:35:48 -- setup/devices.sh@53 -- # local found=0 00:04:24.578 20:35:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.578 20:35:48 -- setup/devices.sh@59 -- # local pci status 00:04:24.578 20:35:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.578 20:35:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:04:24.578 20:35:48 -- setup/devices.sh@47 -- # setup output config 00:04:24.578 20:35:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.578 20:35:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:27.878 20:35:51 -- setup/devices.sh@63 -- # found=1 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:51 -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:04:27.878 20:35:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.878 20:35:52 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.878 20:35:52 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.878 20:35:52 -- setup/devices.sh@68 -- # return 0 00:04:27.878 20:35:52 -- setup/devices.sh@187 -- # cleanup_dm 00:04:27.878 20:35:52 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.878 20:35:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:27.878 20:35:52 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:27.878 20:35:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.878 20:35:52 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:27.878 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:27.878 20:35:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:27.878 20:35:52 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:27.878 00:04:27.878 real 0m10.127s 00:04:27.878 user 0m2.420s 00:04:27.878 sys 0m4.673s 00:04:27.878 20:35:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:27.878 20:35:52 -- common/autotest_common.sh@10 -- # set +x 00:04:27.878 ************************************ 00:04:27.878 END TEST dm_mount 00:04:27.878 ************************************ 00:04:27.878 20:35:52 -- setup/devices.sh@1 -- # cleanup 00:04:27.878 20:35:52 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:27.878 20:35:52 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.878 20:35:52 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.878 20:35:52 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:27.878 20:35:52 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.878 20:35:52 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:28.139 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 
00:04:28.139 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:28.139 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:28.139 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:28.139 20:35:52 -- setup/devices.sh@12 -- # cleanup_dm 00:04:28.139 20:35:52 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.139 20:35:52 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.139 20:35:52 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.139 20:35:52 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.139 20:35:52 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.139 20:35:52 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:28.139 00:04:28.139 real 0m28.081s 00:04:28.139 user 0m8.076s 00:04:28.139 sys 0m14.594s 00:04:28.139 20:35:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.139 20:35:52 -- common/autotest_common.sh@10 -- # set +x 00:04:28.139 ************************************ 00:04:28.139 END TEST devices 00:04:28.139 ************************************ 00:04:28.139 00:04:28.139 real 1m36.406s 00:04:28.139 user 0m31.374s 00:04:28.139 sys 0m55.243s 00:04:28.139 20:35:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:28.139 20:35:52 -- common/autotest_common.sh@10 -- # set +x 00:04:28.139 ************************************ 00:04:28.139 END TEST setup.sh 00:04:28.139 ************************************ 00:04:28.399 20:35:52 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:31.705 Hugepages 00:04:31.705 node hugesize free / total 00:04:31.705 node0 1048576kB 0 / 0 00:04:31.705 node0 2048kB 2048 / 2048 00:04:31.705 node1 1048576kB 0 / 0 00:04:31.705 node1 2048kB 0 / 0 00:04:31.705 00:04:31.705 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:31.705 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:31.705 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:31.705 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:31.705 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:31.705 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:31.705 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:31.705 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:31.705 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:31.705 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:31.705 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:31.705 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:31.705 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:31.705 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:31.705 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:31.705 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:31.705 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:31.705 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:04:31.705 20:35:56 -- spdk/autotest.sh@130 -- # uname -s 00:04:31.705 20:35:56 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:31.705 20:35:56 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:31.705 20:35:56 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.086 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:35.086 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:35.086 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:35.086 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:35.086 0000:80:01.2 (8086 0b00): 
ioatdma -> vfio-pci 00:04:35.086 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:35.086 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:35.086 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:35.345 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:35.345 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:35.345 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:35.345 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:35.346 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:35.346 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:35.346 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:35.346 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:37.254 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:37.515 20:36:01 -- common/autotest_common.sh@1518 -- # sleep 1 00:04:38.456 20:36:02 -- common/autotest_common.sh@1519 -- # bdfs=() 00:04:38.456 20:36:02 -- common/autotest_common.sh@1519 -- # local bdfs 00:04:38.456 20:36:02 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:38.456 20:36:02 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:38.456 20:36:02 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:38.456 20:36:02 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:38.456 20:36:02 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.456 20:36:02 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:38.456 20:36:02 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:38.456 20:36:03 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:38.456 20:36:03 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:38.456 20:36:03 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.761 Waiting for block devices as requested 00:04:41.761 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:42.021 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:42.021 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:42.021 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:42.282 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:42.282 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:42.282 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:42.543 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:42.543 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:42.804 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:42.804 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:42.804 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:43.065 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:43.065 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:43.065 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:43.065 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:43.327 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:43.638 20:36:08 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:43.638 20:36:08 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1488 -- # grep 0000:65:00.0/nvme/nvme 00:04:43.638 20:36:08 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1489 -- # [[ -z 
/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:43.638 20:36:08 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:43.638 20:36:08 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:43.638 20:36:08 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:43.638 20:36:08 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:04:43.638 20:36:08 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:43.638 20:36:08 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:43.638 20:36:08 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:43.638 20:36:08 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:43.638 20:36:08 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:43.638 20:36:08 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:43.638 20:36:08 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:43.638 20:36:08 -- common/autotest_common.sh@1543 -- # continue 00:04:43.638 20:36:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:43.638 20:36:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:43.638 20:36:08 -- common/autotest_common.sh@10 -- # set +x 00:04:43.638 20:36:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:43.638 20:36:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:43.638 20:36:08 -- common/autotest_common.sh@10 -- # set +x 00:04:43.638 20:36:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:46.942 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:46.942 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:46.942 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:46.942 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:47.203 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:47.775 20:36:12 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:47.775 20:36:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.775 20:36:12 -- common/autotest_common.sh@10 -- # set +x 00:04:47.775 20:36:12 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:47.775 20:36:12 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:04:47.775 20:36:12 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:04:47.775 20:36:12 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:47.775 20:36:12 -- common/autotest_common.sh@1563 -- # local bdfs 00:04:47.775 20:36:12 -- common/autotest_common.sh@1565 -- # 
get_nvme_bdfs 00:04:47.775 20:36:12 -- common/autotest_common.sh@1499 -- # bdfs=() 00:04:47.775 20:36:12 -- common/autotest_common.sh@1499 -- # local bdfs 00:04:47.775 20:36:12 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:47.775 20:36:12 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:47.775 20:36:12 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:04:47.775 20:36:12 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:04:47.775 20:36:12 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:04:47.775 20:36:12 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:04:47.775 20:36:12 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:47.775 20:36:12 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:04:47.775 20:36:12 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:47.775 20:36:12 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:04:47.775 20:36:12 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:04:47.775 20:36:12 -- common/autotest_common.sh@1579 -- # return 0 00:04:47.775 20:36:12 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:47.775 20:36:12 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:47.775 20:36:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:47.775 20:36:12 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:47.775 20:36:12 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:47.775 20:36:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:47.775 20:36:12 -- common/autotest_common.sh@10 -- # set +x 00:04:47.775 20:36:12 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:47.775 20:36:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:47.775 20:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:47.775 20:36:12 -- common/autotest_common.sh@10 -- # set +x 00:04:48.036 ************************************ 00:04:48.036 START TEST env 00:04:48.036 ************************************ 00:04:48.036 20:36:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.036 * Looking for test storage... 
00:04:48.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:48.036 20:36:12 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.036 20:36:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.036 20:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.036 20:36:12 -- common/autotest_common.sh@10 -- # set +x 00:04:48.036 ************************************ 00:04:48.036 START TEST env_memory 00:04:48.036 ************************************ 00:04:48.036 20:36:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.036 00:04:48.036 00:04:48.036 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.036 http://cunit.sourceforge.net/ 00:04:48.036 00:04:48.036 00:04:48.036 Suite: memory 00:04:48.296 Test: alloc and free memory map ...[2024-04-24 20:36:12.701013] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:48.296 passed 00:04:48.296 Test: mem map translation ...[2024-04-24 20:36:12.726691] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:48.296 [2024-04-24 20:36:12.726720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:48.296 [2024-04-24 20:36:12.726774] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:48.296 [2024-04-24 20:36:12.726782] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:48.296 passed 00:04:48.296 Test: mem map registration ...[2024-04-24 20:36:12.782072] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:48.296 [2024-04-24 20:36:12.782093] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:48.296 passed 00:04:48.296 Test: mem map adjacent registrations ...passed 00:04:48.296 00:04:48.296 Run Summary: Type Total Ran Passed Failed Inactive 00:04:48.296 suites 1 1 n/a 0 0 00:04:48.296 tests 4 4 4 0 0 00:04:48.296 asserts 152 152 152 0 n/a 00:04:48.296 00:04:48.296 Elapsed time = 0.194 seconds 00:04:48.296 00:04:48.296 real 0m0.208s 00:04:48.296 user 0m0.195s 00:04:48.296 sys 0m0.012s 00:04:48.296 20:36:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:48.296 20:36:12 -- common/autotest_common.sh@10 -- # set +x 00:04:48.296 ************************************ 00:04:48.296 END TEST env_memory 00:04:48.296 ************************************ 00:04:48.296 20:36:12 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.296 20:36:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:48.296 20:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:48.296 20:36:12 -- common/autotest_common.sh@10 -- # set +x 
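Several steps earlier in this log (nvme_namespace_revert and opal_revert_cleanup) enumerate NVMe controllers with the same one-liner. A small sketch of that discovery pattern as it appears in the trace, assuming jq is installed and using the SPDK checkout path from this job:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk     # checkout path from this job
# gen_nvme.sh prints an SPDK bdev config as JSON; jq pulls each controller's
# PCI address (traddr), producing the BDF list the later per-device loops use
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"                                    # 0000:65:00.0 on this node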
00:04:48.559 ************************************ 00:04:48.559 START TEST env_vtophys 00:04:48.559 ************************************ 00:04:48.559 20:36:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:48.559 EAL: lib.eal log level changed from notice to debug 00:04:48.559 EAL: Detected lcore 0 as core 0 on socket 0 00:04:48.559 EAL: Detected lcore 1 as core 1 on socket 0 00:04:48.559 EAL: Detected lcore 2 as core 2 on socket 0 00:04:48.559 EAL: Detected lcore 3 as core 3 on socket 0 00:04:48.559 EAL: Detected lcore 4 as core 4 on socket 0 00:04:48.559 EAL: Detected lcore 5 as core 5 on socket 0 00:04:48.559 EAL: Detected lcore 6 as core 6 on socket 0 00:04:48.559 EAL: Detected lcore 7 as core 7 on socket 0 00:04:48.559 EAL: Detected lcore 8 as core 8 on socket 0 00:04:48.559 EAL: Detected lcore 9 as core 9 on socket 0 00:04:48.559 EAL: Detected lcore 10 as core 10 on socket 0 00:04:48.559 EAL: Detected lcore 11 as core 11 on socket 0 00:04:48.559 EAL: Detected lcore 12 as core 12 on socket 0 00:04:48.559 EAL: Detected lcore 13 as core 13 on socket 0 00:04:48.559 EAL: Detected lcore 14 as core 14 on socket 0 00:04:48.559 EAL: Detected lcore 15 as core 15 on socket 0 00:04:48.559 EAL: Detected lcore 16 as core 16 on socket 0 00:04:48.559 EAL: Detected lcore 17 as core 17 on socket 0 00:04:48.559 EAL: Detected lcore 18 as core 18 on socket 0 00:04:48.559 EAL: Detected lcore 19 as core 19 on socket 0 00:04:48.559 EAL: Detected lcore 20 as core 20 on socket 0 00:04:48.559 EAL: Detected lcore 21 as core 21 on socket 0 00:04:48.559 EAL: Detected lcore 22 as core 22 on socket 0 00:04:48.559 EAL: Detected lcore 23 as core 23 on socket 0 00:04:48.559 EAL: Detected lcore 24 as core 24 on socket 0 00:04:48.559 EAL: Detected lcore 25 as core 25 on socket 0 00:04:48.559 EAL: Detected lcore 26 as core 26 on socket 0 00:04:48.559 EAL: Detected lcore 27 as core 27 on socket 0 00:04:48.559 EAL: Detected lcore 28 as core 28 on socket 0 00:04:48.559 EAL: Detected lcore 29 as core 29 on socket 0 00:04:48.559 EAL: Detected lcore 30 as core 30 on socket 0 00:04:48.559 EAL: Detected lcore 31 as core 31 on socket 0 00:04:48.559 EAL: Detected lcore 32 as core 32 on socket 0 00:04:48.559 EAL: Detected lcore 33 as core 33 on socket 0 00:04:48.559 EAL: Detected lcore 34 as core 34 on socket 0 00:04:48.559 EAL: Detected lcore 35 as core 35 on socket 0 00:04:48.559 EAL: Detected lcore 36 as core 0 on socket 1 00:04:48.559 EAL: Detected lcore 37 as core 1 on socket 1 00:04:48.559 EAL: Detected lcore 38 as core 2 on socket 1 00:04:48.559 EAL: Detected lcore 39 as core 3 on socket 1 00:04:48.559 EAL: Detected lcore 40 as core 4 on socket 1 00:04:48.559 EAL: Detected lcore 41 as core 5 on socket 1 00:04:48.559 EAL: Detected lcore 42 as core 6 on socket 1 00:04:48.559 EAL: Detected lcore 43 as core 7 on socket 1 00:04:48.559 EAL: Detected lcore 44 as core 8 on socket 1 00:04:48.559 EAL: Detected lcore 45 as core 9 on socket 1 00:04:48.559 EAL: Detected lcore 46 as core 10 on socket 1 00:04:48.559 EAL: Detected lcore 47 as core 11 on socket 1 00:04:48.559 EAL: Detected lcore 48 as core 12 on socket 1 00:04:48.560 EAL: Detected lcore 49 as core 13 on socket 1 00:04:48.560 EAL: Detected lcore 50 as core 14 on socket 1 00:04:48.560 EAL: Detected lcore 51 as core 15 on socket 1 00:04:48.560 EAL: Detected lcore 52 as core 16 on socket 1 00:04:48.560 EAL: Detected lcore 53 as core 17 on socket 1 00:04:48.560 EAL: Detected lcore 54 as core 18 on socket 1 
00:04:48.560 EAL: Detected lcore 55 as core 19 on socket 1 00:04:48.560 EAL: Detected lcore 56 as core 20 on socket 1 00:04:48.560 EAL: Detected lcore 57 as core 21 on socket 1 00:04:48.560 EAL: Detected lcore 58 as core 22 on socket 1 00:04:48.560 EAL: Detected lcore 59 as core 23 on socket 1 00:04:48.560 EAL: Detected lcore 60 as core 24 on socket 1 00:04:48.560 EAL: Detected lcore 61 as core 25 on socket 1 00:04:48.560 EAL: Detected lcore 62 as core 26 on socket 1 00:04:48.560 EAL: Detected lcore 63 as core 27 on socket 1 00:04:48.560 EAL: Detected lcore 64 as core 28 on socket 1 00:04:48.560 EAL: Detected lcore 65 as core 29 on socket 1 00:04:48.560 EAL: Detected lcore 66 as core 30 on socket 1 00:04:48.560 EAL: Detected lcore 67 as core 31 on socket 1 00:04:48.560 EAL: Detected lcore 68 as core 32 on socket 1 00:04:48.560 EAL: Detected lcore 69 as core 33 on socket 1 00:04:48.560 EAL: Detected lcore 70 as core 34 on socket 1 00:04:48.560 EAL: Detected lcore 71 as core 35 on socket 1 00:04:48.560 EAL: Detected lcore 72 as core 0 on socket 0 00:04:48.560 EAL: Detected lcore 73 as core 1 on socket 0 00:04:48.560 EAL: Detected lcore 74 as core 2 on socket 0 00:04:48.560 EAL: Detected lcore 75 as core 3 on socket 0 00:04:48.560 EAL: Detected lcore 76 as core 4 on socket 0 00:04:48.560 EAL: Detected lcore 77 as core 5 on socket 0 00:04:48.560 EAL: Detected lcore 78 as core 6 on socket 0 00:04:48.560 EAL: Detected lcore 79 as core 7 on socket 0 00:04:48.560 EAL: Detected lcore 80 as core 8 on socket 0 00:04:48.560 EAL: Detected lcore 81 as core 9 on socket 0 00:04:48.560 EAL: Detected lcore 82 as core 10 on socket 0 00:04:48.560 EAL: Detected lcore 83 as core 11 on socket 0 00:04:48.560 EAL: Detected lcore 84 as core 12 on socket 0 00:04:48.560 EAL: Detected lcore 85 as core 13 on socket 0 00:04:48.560 EAL: Detected lcore 86 as core 14 on socket 0 00:04:48.560 EAL: Detected lcore 87 as core 15 on socket 0 00:04:48.560 EAL: Detected lcore 88 as core 16 on socket 0 00:04:48.560 EAL: Detected lcore 89 as core 17 on socket 0 00:04:48.560 EAL: Detected lcore 90 as core 18 on socket 0 00:04:48.560 EAL: Detected lcore 91 as core 19 on socket 0 00:04:48.560 EAL: Detected lcore 92 as core 20 on socket 0 00:04:48.560 EAL: Detected lcore 93 as core 21 on socket 0 00:04:48.560 EAL: Detected lcore 94 as core 22 on socket 0 00:04:48.560 EAL: Detected lcore 95 as core 23 on socket 0 00:04:48.560 EAL: Detected lcore 96 as core 24 on socket 0 00:04:48.560 EAL: Detected lcore 97 as core 25 on socket 0 00:04:48.560 EAL: Detected lcore 98 as core 26 on socket 0 00:04:48.560 EAL: Detected lcore 99 as core 27 on socket 0 00:04:48.560 EAL: Detected lcore 100 as core 28 on socket 0 00:04:48.560 EAL: Detected lcore 101 as core 29 on socket 0 00:04:48.560 EAL: Detected lcore 102 as core 30 on socket 0 00:04:48.560 EAL: Detected lcore 103 as core 31 on socket 0 00:04:48.560 EAL: Detected lcore 104 as core 32 on socket 0 00:04:48.560 EAL: Detected lcore 105 as core 33 on socket 0 00:04:48.560 EAL: Detected lcore 106 as core 34 on socket 0 00:04:48.560 EAL: Detected lcore 107 as core 35 on socket 0 00:04:48.560 EAL: Detected lcore 108 as core 0 on socket 1 00:04:48.560 EAL: Detected lcore 109 as core 1 on socket 1 00:04:48.560 EAL: Detected lcore 110 as core 2 on socket 1 00:04:48.560 EAL: Detected lcore 111 as core 3 on socket 1 00:04:48.560 EAL: Detected lcore 112 as core 4 on socket 1 00:04:48.560 EAL: Detected lcore 113 as core 5 on socket 1 00:04:48.560 EAL: Detected lcore 114 as core 6 on socket 1 00:04:48.560 
EAL: Detected lcore 115 as core 7 on socket 1 00:04:48.560 EAL: Detected lcore 116 as core 8 on socket 1 00:04:48.560 EAL: Detected lcore 117 as core 9 on socket 1 00:04:48.560 EAL: Detected lcore 118 as core 10 on socket 1 00:04:48.560 EAL: Detected lcore 119 as core 11 on socket 1 00:04:48.560 EAL: Detected lcore 120 as core 12 on socket 1 00:04:48.560 EAL: Detected lcore 121 as core 13 on socket 1 00:04:48.560 EAL: Detected lcore 122 as core 14 on socket 1 00:04:48.560 EAL: Detected lcore 123 as core 15 on socket 1 00:04:48.560 EAL: Detected lcore 124 as core 16 on socket 1 00:04:48.560 EAL: Detected lcore 125 as core 17 on socket 1 00:04:48.560 EAL: Detected lcore 126 as core 18 on socket 1 00:04:48.560 EAL: Detected lcore 127 as core 19 on socket 1 00:04:48.560 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:48.560 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:48.560 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:48.560 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:48.560 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:48.560 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:48.560 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:48.560 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:48.560 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:48.560 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:48.560 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:48.560 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:48.560 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:48.560 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:48.560 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:48.560 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:48.560 EAL: Maximum logical cores by configuration: 128 00:04:48.560 EAL: Detected CPU lcores: 128 00:04:48.560 EAL: Detected NUMA nodes: 2 00:04:48.560 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:48.560 EAL: Detected shared linkage of DPDK 00:04:48.560 EAL: No shared files mode enabled, IPC will be disabled 00:04:48.560 EAL: Bus pci wants IOVA as 'DC' 00:04:48.560 EAL: Buses did not request a specific IOVA mode. 00:04:48.560 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:48.560 EAL: Selected IOVA mode 'VA' 00:04:48.560 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.560 EAL: Probing VFIO support... 00:04:48.560 EAL: IOMMU type 1 (Type 1) is supported 00:04:48.560 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:48.560 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:48.560 EAL: VFIO support initialized 00:04:48.560 EAL: Ask a virtual area of 0x2e000 bytes 00:04:48.560 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:48.560 EAL: Setting up physically contiguous memory... 
00:04:48.560 EAL: Setting maximum number of open files to 524288 00:04:48.560 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:48.560 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:48.560 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:48.560 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:48.560 EAL: Ask a virtual area of 0x61000 bytes 00:04:48.560 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:48.560 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:48.560 EAL: Ask a virtual area of 0x400000000 bytes 00:04:48.560 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:48.560 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:48.560 EAL: Hugepages will be freed exactly as allocated. 00:04:48.560 EAL: No shared files mode enabled, IPC is disabled 00:04:48.560 EAL: No shared files mode enabled, IPC is disabled 00:04:48.560 EAL: TSC frequency is ~2400000 KHz 00:04:48.560 EAL: Main lcore 0 is ready (tid=7fe219709a00;cpuset=[0]) 00:04:48.560 EAL: Trying to obtain current memory policy. 00:04:48.560 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.560 EAL: Restoring previous memory policy: 0 00:04:48.560 EAL: request: mp_malloc_sync 00:04:48.560 EAL: No shared files mode enabled, IPC is disabled 00:04:48.560 EAL: Heap on socket 0 was expanded by 2MB 00:04:48.560 EAL: No shared files mode enabled, IPC is disabled 00:04:48.560 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:48.560 EAL: Mem event callback 'spdk:(nil)' registered 00:04:48.561 00:04:48.561 00:04:48.561 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.561 http://cunit.sourceforge.net/ 00:04:48.561 00:04:48.561 00:04:48.561 Suite: components_suite 00:04:48.561 Test: vtophys_malloc_test ...passed 00:04:48.561 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:48.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.561 EAL: Restoring previous memory policy: 4 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was expanded by 4MB 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was shrunk by 4MB 00:04:48.561 EAL: Trying to obtain current memory policy. 00:04:48.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.561 EAL: Restoring previous memory policy: 4 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was expanded by 6MB 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was shrunk by 6MB 00:04:48.561 EAL: Trying to obtain current memory policy. 00:04:48.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.561 EAL: Restoring previous memory policy: 4 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was expanded by 10MB 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was shrunk by 10MB 00:04:48.561 EAL: Trying to obtain current memory policy. 
00:04:48.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.561 EAL: Restoring previous memory policy: 4 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was expanded by 18MB 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was shrunk by 18MB 00:04:48.561 EAL: Trying to obtain current memory policy. 00:04:48.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.561 EAL: Restoring previous memory policy: 4 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was expanded by 34MB 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was shrunk by 34MB 00:04:48.561 EAL: Trying to obtain current memory policy. 00:04:48.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.561 EAL: Restoring previous memory policy: 4 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was expanded by 66MB 00:04:48.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.561 EAL: request: mp_malloc_sync 00:04:48.561 EAL: No shared files mode enabled, IPC is disabled 00:04:48.561 EAL: Heap on socket 0 was shrunk by 66MB 00:04:48.561 EAL: Trying to obtain current memory policy. 00:04:48.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.835 EAL: Restoring previous memory policy: 4 00:04:48.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.835 EAL: request: mp_malloc_sync 00:04:48.835 EAL: No shared files mode enabled, IPC is disabled 00:04:48.835 EAL: Heap on socket 0 was expanded by 130MB 00:04:48.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.835 EAL: request: mp_malloc_sync 00:04:48.835 EAL: No shared files mode enabled, IPC is disabled 00:04:48.835 EAL: Heap on socket 0 was shrunk by 130MB 00:04:48.835 EAL: Trying to obtain current memory policy. 00:04:48.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.835 EAL: Restoring previous memory policy: 4 00:04:48.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.835 EAL: request: mp_malloc_sync 00:04:48.835 EAL: No shared files mode enabled, IPC is disabled 00:04:48.835 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.835 EAL: request: mp_malloc_sync 00:04:48.835 EAL: No shared files mode enabled, IPC is disabled 00:04:48.835 EAL: Heap on socket 0 was shrunk by 258MB 00:04:48.835 EAL: Trying to obtain current memory policy. 
00:04:48.835 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.835 EAL: Restoring previous memory policy: 4 00:04:48.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.835 EAL: request: mp_malloc_sync 00:04:48.835 EAL: No shared files mode enabled, IPC is disabled 00:04:48.835 EAL: Heap on socket 0 was expanded by 514MB 00:04:48.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.095 EAL: request: mp_malloc_sync 00:04:49.095 EAL: No shared files mode enabled, IPC is disabled 00:04:49.095 EAL: Heap on socket 0 was shrunk by 514MB 00:04:49.095 EAL: Trying to obtain current memory policy. 00:04:49.095 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.095 EAL: Restoring previous memory policy: 4 00:04:49.095 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.095 EAL: request: mp_malloc_sync 00:04:49.095 EAL: No shared files mode enabled, IPC is disabled 00:04:49.095 EAL: Heap on socket 0 was expanded by 1026MB 00:04:49.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.355 EAL: request: mp_malloc_sync 00:04:49.355 EAL: No shared files mode enabled, IPC is disabled 00:04:49.355 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:49.355 passed 00:04:49.355 00:04:49.355 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.355 suites 1 1 n/a 0 0 00:04:49.355 tests 2 2 2 0 0 00:04:49.355 asserts 497 497 497 0 n/a 00:04:49.355 00:04:49.355 Elapsed time = 0.682 seconds 00:04:49.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.355 EAL: request: mp_malloc_sync 00:04:49.355 EAL: No shared files mode enabled, IPC is disabled 00:04:49.355 EAL: Heap on socket 0 was shrunk by 2MB 00:04:49.355 EAL: No shared files mode enabled, IPC is disabled 00:04:49.355 EAL: No shared files mode enabled, IPC is disabled 00:04:49.355 EAL: No shared files mode enabled, IPC is disabled 00:04:49.355 00:04:49.355 real 0m0.814s 00:04:49.355 user 0m0.421s 00:04:49.355 sys 0m0.368s 00:04:49.355 20:36:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.355 20:36:13 -- common/autotest_common.sh@10 -- # set +x 00:04:49.355 ************************************ 00:04:49.355 END TEST env_vtophys 00:04:49.355 ************************************ 00:04:49.355 20:36:13 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.355 20:36:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.355 20:36:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.355 20:36:13 -- common/autotest_common.sh@10 -- # set +x 00:04:49.621 ************************************ 00:04:49.621 START TEST env_pci 00:04:49.621 ************************************ 00:04:49.621 20:36:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:49.621 00:04:49.621 00:04:49.621 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.621 http://cunit.sourceforge.net/ 00:04:49.621 00:04:49.621 00:04:49.621 Suite: pci 00:04:49.621 Test: pci_hook ...[2024-04-24 20:36:14.014202] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2566163 has claimed it 00:04:49.621 EAL: Cannot find device (10000:00:01.0) 00:04:49.621 EAL: Failed to attach device on primary process 00:04:49.621 passed 00:04:49.621 00:04:49.621 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.621 suites 1 1 n/a 0 0 00:04:49.621 tests 1 1 1 0 0 
00:04:49.621 asserts 25 25 25 0 n/a 00:04:49.621 00:04:49.621 Elapsed time = 0.015 seconds 00:04:49.621 00:04:49.621 real 0m0.024s 00:04:49.621 user 0m0.008s 00:04:49.621 sys 0m0.015s 00:04:49.621 20:36:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:49.621 20:36:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.621 ************************************ 00:04:49.621 END TEST env_pci 00:04:49.621 ************************************ 00:04:49.621 20:36:14 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:49.621 20:36:14 -- env/env.sh@15 -- # uname 00:04:49.621 20:36:14 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:49.621 20:36:14 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:49.621 20:36:14 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.622 20:36:14 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:49.622 20:36:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.622 20:36:14 -- common/autotest_common.sh@10 -- # set +x 00:04:49.622 ************************************ 00:04:49.622 START TEST env_dpdk_post_init 00:04:49.622 ************************************ 00:04:49.622 20:36:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:49.622 EAL: Detected CPU lcores: 128 00:04:49.622 EAL: Detected NUMA nodes: 2 00:04:49.622 EAL: Detected shared linkage of DPDK 00:04:49.622 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:49.882 EAL: Selected IOVA mode 'VA' 00:04:49.882 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.882 EAL: VFIO support initialized 00:04:49.882 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:49.882 EAL: Using IOMMU type 1 (Type 1) 00:04:49.882 EAL: Ignore mapping IO port bar(1) 00:04:50.143 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:50.143 EAL: Ignore mapping IO port bar(1) 00:04:50.403 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:50.403 EAL: Ignore mapping IO port bar(1) 00:04:50.403 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:50.663 EAL: Ignore mapping IO port bar(1) 00:04:50.663 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:50.923 EAL: Ignore mapping IO port bar(1) 00:04:50.923 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:51.183 EAL: Ignore mapping IO port bar(1) 00:04:51.183 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:51.183 EAL: Ignore mapping IO port bar(1) 00:04:51.443 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:51.443 EAL: Ignore mapping IO port bar(1) 00:04:51.703 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:51.963 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:51.963 EAL: Ignore mapping IO port bar(1) 00:04:51.963 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:52.223 EAL: Ignore mapping IO port bar(1) 00:04:52.223 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:52.483 EAL: Ignore mapping IO port bar(1) 00:04:52.483 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 
00:04:52.743 EAL: Ignore mapping IO port bar(1) 00:04:52.743 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:53.003 EAL: Ignore mapping IO port bar(1) 00:04:53.003 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:53.003 EAL: Ignore mapping IO port bar(1) 00:04:53.264 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:53.264 EAL: Ignore mapping IO port bar(1) 00:04:53.524 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:53.524 EAL: Ignore mapping IO port bar(1) 00:04:53.524 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:53.784 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:53.784 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:53.784 Starting DPDK initialization... 00:04:53.784 Starting SPDK post initialization... 00:04:53.784 SPDK NVMe probe 00:04:53.784 Attaching to 0000:65:00.0 00:04:53.784 Attached to 0000:65:00.0 00:04:53.784 Cleaning up... 00:04:55.695 00:04:55.695 real 0m5.727s 00:04:55.695 user 0m0.190s 00:04:55.695 sys 0m0.092s 00:04:55.695 20:36:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.695 20:36:19 -- common/autotest_common.sh@10 -- # set +x 00:04:55.695 ************************************ 00:04:55.695 END TEST env_dpdk_post_init 00:04:55.695 ************************************ 00:04:55.695 20:36:19 -- env/env.sh@26 -- # uname 00:04:55.695 20:36:19 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.696 20:36:19 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.696 20:36:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.696 20:36:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.696 20:36:19 -- common/autotest_common.sh@10 -- # set +x 00:04:55.696 ************************************ 00:04:55.696 START TEST env_mem_callbacks 00:04:55.696 ************************************ 00:04:55.696 20:36:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.696 EAL: Detected CPU lcores: 128 00:04:55.696 EAL: Detected NUMA nodes: 2 00:04:55.696 EAL: Detected shared linkage of DPDK 00:04:55.696 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.696 EAL: Selected IOVA mode 'VA' 00:04:55.696 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.696 EAL: VFIO support initialized 00:04:55.696 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.696 00:04:55.696 00:04:55.696 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.696 http://cunit.sourceforge.net/ 00:04:55.696 00:04:55.696 00:04:55.696 Suite: memory 00:04:55.696 Test: test ... 
00:04:55.696 register 0x200000200000 2097152 00:04:55.696 malloc 3145728 00:04:55.696 register 0x200000400000 4194304 00:04:55.696 buf 0x200000500000 len 3145728 PASSED 00:04:55.696 malloc 64 00:04:55.696 buf 0x2000004fff40 len 64 PASSED 00:04:55.696 malloc 4194304 00:04:55.696 register 0x200000800000 6291456 00:04:55.696 buf 0x200000a00000 len 4194304 PASSED 00:04:55.696 free 0x200000500000 3145728 00:04:55.696 free 0x2000004fff40 64 00:04:55.696 unregister 0x200000400000 4194304 PASSED 00:04:55.696 free 0x200000a00000 4194304 00:04:55.696 unregister 0x200000800000 6291456 PASSED 00:04:55.696 malloc 8388608 00:04:55.696 register 0x200000400000 10485760 00:04:55.696 buf 0x200000600000 len 8388608 PASSED 00:04:55.696 free 0x200000600000 8388608 00:04:55.696 unregister 0x200000400000 10485760 PASSED 00:04:55.696 passed 00:04:55.696 00:04:55.696 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.696 suites 1 1 n/a 0 0 00:04:55.696 tests 1 1 1 0 0 00:04:55.696 asserts 15 15 15 0 n/a 00:04:55.696 00:04:55.696 Elapsed time = 0.010 seconds 00:04:55.696 00:04:55.696 real 0m0.068s 00:04:55.696 user 0m0.025s 00:04:55.696 sys 0m0.043s 00:04:55.696 20:36:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.696 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:04:55.696 ************************************ 00:04:55.696 END TEST env_mem_callbacks 00:04:55.696 ************************************ 00:04:55.696 00:04:55.696 real 0m7.842s 00:04:55.696 user 0m1.234s 00:04:55.696 sys 0m1.066s 00:04:55.696 20:36:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:55.696 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:04:55.696 ************************************ 00:04:55.696 END TEST env 00:04:55.696 ************************************ 00:04:55.696 20:36:20 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.696 20:36:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.696 20:36:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.696 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:04:55.956 ************************************ 00:04:55.956 START TEST rpc 00:04:55.956 ************************************ 00:04:55.956 20:36:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:55.956 * Looking for test storage... 00:04:55.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:55.956 20:36:20 -- rpc/rpc.sh@65 -- # spdk_pid=2567545 00:04:55.956 20:36:20 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.956 20:36:20 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:55.956 20:36:20 -- rpc/rpc.sh@67 -- # waitforlisten 2567545 00:04:55.956 20:36:20 -- common/autotest_common.sh@817 -- # '[' -z 2567545 ']' 00:04:55.956 20:36:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.956 20:36:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:55.956 20:36:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
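The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above comes from the harness's waitforlisten helper. Conceptually it just polls the new target's RPC socket until it answers; a minimal stand-alone equivalent would be roughly the following (a sketch only; rpc_get_methods is used here just as a cheap query, and the real helper in autotest_common.sh also checks the PID and applies a timeout):

  # poll /var/tmp/spdk.sock until the freshly started spdk_tgt responds
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done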
00:04:55.956 20:36:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:55.956 20:36:20 -- common/autotest_common.sh@10 -- # set +x 00:04:56.216 [2024-04-24 20:36:20.604167] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:04:56.216 [2024-04-24 20:36:20.604215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567545 ] 00:04:56.216 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.216 [2024-04-24 20:36:20.682262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.216 [2024-04-24 20:36:20.768818] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:56.216 [2024-04-24 20:36:20.768869] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2567545' to capture a snapshot of events at runtime. 00:04:56.216 [2024-04-24 20:36:20.768877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.216 [2024-04-24 20:36:20.768884] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:56.216 [2024-04-24 20:36:20.768890] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2567545 for offline analysis/debug. 00:04:56.216 [2024-04-24 20:36:20.768916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.156 20:36:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:57.156 20:36:21 -- common/autotest_common.sh@850 -- # return 0 00:04:57.156 20:36:21 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.156 20:36:21 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.156 20:36:21 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:57.157 20:36:21 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:57.157 20:36:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.157 20:36:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.157 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.157 ************************************ 00:04:57.157 START TEST rpc_integrity 00:04:57.157 ************************************ 00:04:57.157 20:36:21 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:57.157 20:36:21 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.157 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.157 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.157 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.157 20:36:21 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.157 20:36:21 -- rpc/rpc.sh@13 -- # jq length 00:04:57.157 20:36:21 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.157 20:36:21 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.157 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 
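For reference, the rpc_integrity sequence driven here, the bdev_malloc_create call above and the passthru/delete checks that continue below, maps onto plain scripts/rpc.py calls against the running target; stripped of the harness it is roughly:

  scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks -> Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length                    # expect 2 (Malloc0 + Passthru0)
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length                    # expect 0 again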
00:04:57.157 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.157 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.157 20:36:21 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:57.157 20:36:21 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.157 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.157 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.157 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.157 20:36:21 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.157 { 00:04:57.157 "name": "Malloc0", 00:04:57.157 "aliases": [ 00:04:57.157 "8c91cfec-4907-4cdf-b891-29827c16ccda" 00:04:57.157 ], 00:04:57.157 "product_name": "Malloc disk", 00:04:57.157 "block_size": 512, 00:04:57.157 "num_blocks": 16384, 00:04:57.157 "uuid": "8c91cfec-4907-4cdf-b891-29827c16ccda", 00:04:57.157 "assigned_rate_limits": { 00:04:57.157 "rw_ios_per_sec": 0, 00:04:57.157 "rw_mbytes_per_sec": 0, 00:04:57.157 "r_mbytes_per_sec": 0, 00:04:57.157 "w_mbytes_per_sec": 0 00:04:57.157 }, 00:04:57.157 "claimed": false, 00:04:57.157 "zoned": false, 00:04:57.157 "supported_io_types": { 00:04:57.157 "read": true, 00:04:57.157 "write": true, 00:04:57.157 "unmap": true, 00:04:57.157 "write_zeroes": true, 00:04:57.157 "flush": true, 00:04:57.157 "reset": true, 00:04:57.157 "compare": false, 00:04:57.157 "compare_and_write": false, 00:04:57.157 "abort": true, 00:04:57.157 "nvme_admin": false, 00:04:57.157 "nvme_io": false 00:04:57.157 }, 00:04:57.157 "memory_domains": [ 00:04:57.157 { 00:04:57.157 "dma_device_id": "system", 00:04:57.157 "dma_device_type": 1 00:04:57.157 }, 00:04:57.157 { 00:04:57.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.157 "dma_device_type": 2 00:04:57.157 } 00:04:57.157 ], 00:04:57.157 "driver_specific": {} 00:04:57.157 } 00:04:57.157 ]' 00:04:57.157 20:36:21 -- rpc/rpc.sh@17 -- # jq length 00:04:57.157 20:36:21 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.157 20:36:21 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:57.157 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.157 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.157 [2024-04-24 20:36:21.744567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:57.157 [2024-04-24 20:36:21.744610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.157 [2024-04-24 20:36:21.744626] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17550b0 00:04:57.157 [2024-04-24 20:36:21.744634] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.157 [2024-04-24 20:36:21.746124] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.157 [2024-04-24 20:36:21.746153] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.157 Passthru0 00:04:57.157 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.157 20:36:21 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.157 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.157 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.157 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.157 20:36:21 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.157 { 00:04:57.157 "name": "Malloc0", 00:04:57.157 "aliases": [ 00:04:57.157 "8c91cfec-4907-4cdf-b891-29827c16ccda" 00:04:57.157 ], 00:04:57.157 "product_name": "Malloc disk", 00:04:57.157 "block_size": 512, 
00:04:57.157 "num_blocks": 16384, 00:04:57.157 "uuid": "8c91cfec-4907-4cdf-b891-29827c16ccda", 00:04:57.157 "assigned_rate_limits": { 00:04:57.157 "rw_ios_per_sec": 0, 00:04:57.157 "rw_mbytes_per_sec": 0, 00:04:57.157 "r_mbytes_per_sec": 0, 00:04:57.157 "w_mbytes_per_sec": 0 00:04:57.157 }, 00:04:57.157 "claimed": true, 00:04:57.157 "claim_type": "exclusive_write", 00:04:57.157 "zoned": false, 00:04:57.157 "supported_io_types": { 00:04:57.157 "read": true, 00:04:57.157 "write": true, 00:04:57.157 "unmap": true, 00:04:57.157 "write_zeroes": true, 00:04:57.157 "flush": true, 00:04:57.157 "reset": true, 00:04:57.157 "compare": false, 00:04:57.157 "compare_and_write": false, 00:04:57.157 "abort": true, 00:04:57.157 "nvme_admin": false, 00:04:57.157 "nvme_io": false 00:04:57.157 }, 00:04:57.157 "memory_domains": [ 00:04:57.157 { 00:04:57.157 "dma_device_id": "system", 00:04:57.157 "dma_device_type": 1 00:04:57.157 }, 00:04:57.157 { 00:04:57.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.157 "dma_device_type": 2 00:04:57.157 } 00:04:57.157 ], 00:04:57.157 "driver_specific": {} 00:04:57.157 }, 00:04:57.157 { 00:04:57.157 "name": "Passthru0", 00:04:57.157 "aliases": [ 00:04:57.157 "9786de7a-2274-589b-ab91-89380b4ea4bd" 00:04:57.157 ], 00:04:57.157 "product_name": "passthru", 00:04:57.157 "block_size": 512, 00:04:57.157 "num_blocks": 16384, 00:04:57.157 "uuid": "9786de7a-2274-589b-ab91-89380b4ea4bd", 00:04:57.157 "assigned_rate_limits": { 00:04:57.157 "rw_ios_per_sec": 0, 00:04:57.157 "rw_mbytes_per_sec": 0, 00:04:57.157 "r_mbytes_per_sec": 0, 00:04:57.157 "w_mbytes_per_sec": 0 00:04:57.157 }, 00:04:57.157 "claimed": false, 00:04:57.157 "zoned": false, 00:04:57.157 "supported_io_types": { 00:04:57.157 "read": true, 00:04:57.157 "write": true, 00:04:57.157 "unmap": true, 00:04:57.157 "write_zeroes": true, 00:04:57.157 "flush": true, 00:04:57.157 "reset": true, 00:04:57.157 "compare": false, 00:04:57.157 "compare_and_write": false, 00:04:57.157 "abort": true, 00:04:57.157 "nvme_admin": false, 00:04:57.157 "nvme_io": false 00:04:57.157 }, 00:04:57.157 "memory_domains": [ 00:04:57.157 { 00:04:57.157 "dma_device_id": "system", 00:04:57.157 "dma_device_type": 1 00:04:57.157 }, 00:04:57.157 { 00:04:57.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.157 "dma_device_type": 2 00:04:57.157 } 00:04:57.157 ], 00:04:57.157 "driver_specific": { 00:04:57.157 "passthru": { 00:04:57.157 "name": "Passthru0", 00:04:57.157 "base_bdev_name": "Malloc0" 00:04:57.157 } 00:04:57.157 } 00:04:57.157 } 00:04:57.157 ]' 00:04:57.157 20:36:21 -- rpc/rpc.sh@21 -- # jq length 00:04:57.418 20:36:21 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.418 20:36:21 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.418 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.418 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.418 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.418 20:36:21 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:57.418 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.418 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.418 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.418 20:36:21 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.418 20:36:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.418 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.418 20:36:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.418 20:36:21 -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.418 20:36:21 -- rpc/rpc.sh@26 -- # jq length 00:04:57.418 20:36:21 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.418 00:04:57.418 real 0m0.272s 00:04:57.418 user 0m0.177s 00:04:57.418 sys 0m0.030s 00:04:57.418 20:36:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.418 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.418 ************************************ 00:04:57.418 END TEST rpc_integrity 00:04:57.418 ************************************ 00:04:57.418 20:36:21 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:57.418 20:36:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.418 20:36:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.418 20:36:21 -- common/autotest_common.sh@10 -- # set +x 00:04:57.678 ************************************ 00:04:57.678 START TEST rpc_plugins 00:04:57.678 ************************************ 00:04:57.678 20:36:22 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:57.678 20:36:22 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:57.678 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.678 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.678 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.678 20:36:22 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:57.678 20:36:22 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:57.678 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.678 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.678 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.678 20:36:22 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:57.678 { 00:04:57.678 "name": "Malloc1", 00:04:57.678 "aliases": [ 00:04:57.678 "21f6e2f3-fd62-4103-bda7-b4c44d5b7e8a" 00:04:57.678 ], 00:04:57.678 "product_name": "Malloc disk", 00:04:57.678 "block_size": 4096, 00:04:57.678 "num_blocks": 256, 00:04:57.678 "uuid": "21f6e2f3-fd62-4103-bda7-b4c44d5b7e8a", 00:04:57.678 "assigned_rate_limits": { 00:04:57.678 "rw_ios_per_sec": 0, 00:04:57.678 "rw_mbytes_per_sec": 0, 00:04:57.678 "r_mbytes_per_sec": 0, 00:04:57.678 "w_mbytes_per_sec": 0 00:04:57.678 }, 00:04:57.678 "claimed": false, 00:04:57.678 "zoned": false, 00:04:57.678 "supported_io_types": { 00:04:57.678 "read": true, 00:04:57.678 "write": true, 00:04:57.678 "unmap": true, 00:04:57.678 "write_zeroes": true, 00:04:57.678 "flush": true, 00:04:57.678 "reset": true, 00:04:57.678 "compare": false, 00:04:57.678 "compare_and_write": false, 00:04:57.678 "abort": true, 00:04:57.678 "nvme_admin": false, 00:04:57.678 "nvme_io": false 00:04:57.678 }, 00:04:57.678 "memory_domains": [ 00:04:57.678 { 00:04:57.678 "dma_device_id": "system", 00:04:57.678 "dma_device_type": 1 00:04:57.678 }, 00:04:57.678 { 00:04:57.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.678 "dma_device_type": 2 00:04:57.678 } 00:04:57.678 ], 00:04:57.678 "driver_specific": {} 00:04:57.678 } 00:04:57.678 ]' 00:04:57.678 20:36:22 -- rpc/rpc.sh@32 -- # jq length 00:04:57.678 20:36:22 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:57.678 20:36:22 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:57.678 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.678 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.678 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.678 20:36:22 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.678 20:36:22 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:04:57.678 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.678 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.678 20:36:22 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:57.678 20:36:22 -- rpc/rpc.sh@36 -- # jq length 00:04:57.678 20:36:22 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:57.678 00:04:57.678 real 0m0.151s 00:04:57.678 user 0m0.102s 00:04:57.678 sys 0m0.013s 00:04:57.678 20:36:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:57.678 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.678 ************************************ 00:04:57.678 END TEST rpc_plugins 00:04:57.678 ************************************ 00:04:57.678 20:36:22 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:57.678 20:36:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.678 20:36:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.678 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.939 ************************************ 00:04:57.939 START TEST rpc_trace_cmd_test 00:04:57.939 ************************************ 00:04:57.939 20:36:22 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:57.939 20:36:22 -- rpc/rpc.sh@40 -- # local info 00:04:57.939 20:36:22 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:57.939 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:57.939 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:57.939 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:57.939 20:36:22 -- rpc/rpc.sh@42 -- # info='{ 00:04:57.939 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2567545", 00:04:57.939 "tpoint_group_mask": "0x8", 00:04:57.939 "iscsi_conn": { 00:04:57.939 "mask": "0x2", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "scsi": { 00:04:57.939 "mask": "0x4", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "bdev": { 00:04:57.939 "mask": "0x8", 00:04:57.939 "tpoint_mask": "0xffffffffffffffff" 00:04:57.939 }, 00:04:57.939 "nvmf_rdma": { 00:04:57.939 "mask": "0x10", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "nvmf_tcp": { 00:04:57.939 "mask": "0x20", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "ftl": { 00:04:57.939 "mask": "0x40", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "blobfs": { 00:04:57.939 "mask": "0x80", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "dsa": { 00:04:57.939 "mask": "0x200", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "thread": { 00:04:57.939 "mask": "0x400", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "nvme_pcie": { 00:04:57.939 "mask": "0x800", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "iaa": { 00:04:57.939 "mask": "0x1000", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "nvme_tcp": { 00:04:57.939 "mask": "0x2000", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "bdev_nvme": { 00:04:57.939 "mask": "0x4000", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 }, 00:04:57.939 "sock": { 00:04:57.939 "mask": "0x8000", 00:04:57.939 "tpoint_mask": "0x0" 00:04:57.939 } 00:04:57.939 }' 00:04:57.939 20:36:22 -- rpc/rpc.sh@43 -- # jq length 00:04:57.939 20:36:22 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:57.939 20:36:22 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.939 20:36:22 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.939 20:36:22 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 
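The rpc_trace_cmd_test output above reflects that this spdk_tgt was launched with '-e bdev': only the bdev tracepoint group (mask 0x8) is enabled and a trace shm file is exposed under /dev/shm. The jq checks above and below verify exactly those two fields; pulled out by hand it would look roughly like this (a sketch; <PID> stands for the target's pid, and the spdk_trace path is assumed to sit next to spdk_tgt under build/bin):

  scripts/rpc.py trace_get_info | jq -r .tpoint_shm_path       # -> /dev/shm/spdk_tgt_trace.pid<PID>
  scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask     # -> 0x8 when started with '-e bdev'
  build/bin/spdk_trace -s spdk_tgt -p <PID>                    # decode the events captured so far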
00:04:58.199 20:36:22 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:58.199 20:36:22 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:58.199 20:36:22 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:58.199 20:36:22 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:58.199 20:36:22 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:58.199 00:04:58.199 real 0m0.227s 00:04:58.199 user 0m0.190s 00:04:58.200 sys 0m0.028s 00:04:58.200 20:36:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.200 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.200 ************************************ 00:04:58.200 END TEST rpc_trace_cmd_test 00:04:58.200 ************************************ 00:04:58.200 20:36:22 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:58.200 20:36:22 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:58.200 20:36:22 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:58.200 20:36:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.200 20:36:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.200 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 ************************************ 00:04:58.460 START TEST rpc_daemon_integrity 00:04:58.460 ************************************ 00:04:58.460 20:36:22 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:58.460 20:36:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.460 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.460 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.460 20:36:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.460 20:36:22 -- rpc/rpc.sh@13 -- # jq length 00:04:58.460 20:36:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.460 20:36:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.460 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.460 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.460 20:36:22 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:58.460 20:36:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.460 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.460 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 20:36:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.460 20:36:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.460 { 00:04:58.460 "name": "Malloc2", 00:04:58.460 "aliases": [ 00:04:58.460 "7100ece4-a1ba-48f3-83a4-25294e28fb52" 00:04:58.460 ], 00:04:58.460 "product_name": "Malloc disk", 00:04:58.460 "block_size": 512, 00:04:58.460 "num_blocks": 16384, 00:04:58.460 "uuid": "7100ece4-a1ba-48f3-83a4-25294e28fb52", 00:04:58.460 "assigned_rate_limits": { 00:04:58.460 "rw_ios_per_sec": 0, 00:04:58.460 "rw_mbytes_per_sec": 0, 00:04:58.460 "r_mbytes_per_sec": 0, 00:04:58.460 "w_mbytes_per_sec": 0 00:04:58.460 }, 00:04:58.460 "claimed": false, 00:04:58.460 "zoned": false, 00:04:58.460 "supported_io_types": { 00:04:58.460 "read": true, 00:04:58.460 "write": true, 00:04:58.460 "unmap": true, 00:04:58.460 "write_zeroes": true, 00:04:58.460 "flush": true, 00:04:58.460 "reset": true, 00:04:58.460 "compare": false, 00:04:58.460 "compare_and_write": false, 00:04:58.460 "abort": true, 00:04:58.460 "nvme_admin": false, 00:04:58.460 "nvme_io": false 00:04:58.460 }, 00:04:58.460 "memory_domains": [ 00:04:58.460 { 00:04:58.460 "dma_device_id": "system", 00:04:58.460 
"dma_device_type": 1 00:04:58.460 }, 00:04:58.460 { 00:04:58.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.460 "dma_device_type": 2 00:04:58.460 } 00:04:58.460 ], 00:04:58.460 "driver_specific": {} 00:04:58.460 } 00:04:58.460 ]' 00:04:58.460 20:36:22 -- rpc/rpc.sh@17 -- # jq length 00:04:58.460 20:36:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.460 20:36:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:58.460 20:36:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.460 20:36:22 -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 [2024-04-24 20:36:23.004119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:58.460 [2024-04-24 20:36:23.004162] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.460 [2024-04-24 20:36:23.004178] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18fa1f0 00:04:58.460 [2024-04-24 20:36:23.004192] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.460 [2024-04-24 20:36:23.005593] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.460 [2024-04-24 20:36:23.005626] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.460 Passthru0 00:04:58.460 20:36:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.460 20:36:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.460 20:36:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.460 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 20:36:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.460 20:36:23 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.460 { 00:04:58.460 "name": "Malloc2", 00:04:58.460 "aliases": [ 00:04:58.460 "7100ece4-a1ba-48f3-83a4-25294e28fb52" 00:04:58.460 ], 00:04:58.460 "product_name": "Malloc disk", 00:04:58.460 "block_size": 512, 00:04:58.460 "num_blocks": 16384, 00:04:58.460 "uuid": "7100ece4-a1ba-48f3-83a4-25294e28fb52", 00:04:58.460 "assigned_rate_limits": { 00:04:58.460 "rw_ios_per_sec": 0, 00:04:58.460 "rw_mbytes_per_sec": 0, 00:04:58.460 "r_mbytes_per_sec": 0, 00:04:58.460 "w_mbytes_per_sec": 0 00:04:58.460 }, 00:04:58.460 "claimed": true, 00:04:58.460 "claim_type": "exclusive_write", 00:04:58.460 "zoned": false, 00:04:58.460 "supported_io_types": { 00:04:58.460 "read": true, 00:04:58.460 "write": true, 00:04:58.460 "unmap": true, 00:04:58.460 "write_zeroes": true, 00:04:58.460 "flush": true, 00:04:58.460 "reset": true, 00:04:58.461 "compare": false, 00:04:58.461 "compare_and_write": false, 00:04:58.461 "abort": true, 00:04:58.461 "nvme_admin": false, 00:04:58.461 "nvme_io": false 00:04:58.461 }, 00:04:58.461 "memory_domains": [ 00:04:58.461 { 00:04:58.461 "dma_device_id": "system", 00:04:58.461 "dma_device_type": 1 00:04:58.461 }, 00:04:58.461 { 00:04:58.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.461 "dma_device_type": 2 00:04:58.461 } 00:04:58.461 ], 00:04:58.461 "driver_specific": {} 00:04:58.461 }, 00:04:58.461 { 00:04:58.461 "name": "Passthru0", 00:04:58.461 "aliases": [ 00:04:58.461 "f939a673-e77a-589b-afa5-e4cfbf00b5f4" 00:04:58.461 ], 00:04:58.461 "product_name": "passthru", 00:04:58.461 "block_size": 512, 00:04:58.461 "num_blocks": 16384, 00:04:58.461 "uuid": "f939a673-e77a-589b-afa5-e4cfbf00b5f4", 00:04:58.461 "assigned_rate_limits": { 00:04:58.461 "rw_ios_per_sec": 0, 00:04:58.461 "rw_mbytes_per_sec": 0, 00:04:58.461 "r_mbytes_per_sec": 0, 00:04:58.461 
"w_mbytes_per_sec": 0 00:04:58.461 }, 00:04:58.461 "claimed": false, 00:04:58.461 "zoned": false, 00:04:58.461 "supported_io_types": { 00:04:58.461 "read": true, 00:04:58.461 "write": true, 00:04:58.461 "unmap": true, 00:04:58.461 "write_zeroes": true, 00:04:58.461 "flush": true, 00:04:58.461 "reset": true, 00:04:58.461 "compare": false, 00:04:58.461 "compare_and_write": false, 00:04:58.461 "abort": true, 00:04:58.461 "nvme_admin": false, 00:04:58.461 "nvme_io": false 00:04:58.461 }, 00:04:58.461 "memory_domains": [ 00:04:58.461 { 00:04:58.461 "dma_device_id": "system", 00:04:58.461 "dma_device_type": 1 00:04:58.461 }, 00:04:58.461 { 00:04:58.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.461 "dma_device_type": 2 00:04:58.461 } 00:04:58.461 ], 00:04:58.461 "driver_specific": { 00:04:58.461 "passthru": { 00:04:58.461 "name": "Passthru0", 00:04:58.461 "base_bdev_name": "Malloc2" 00:04:58.461 } 00:04:58.461 } 00:04:58.461 } 00:04:58.461 ]' 00:04:58.461 20:36:23 -- rpc/rpc.sh@21 -- # jq length 00:04:58.461 20:36:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.461 20:36:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.461 20:36:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.461 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:58.461 20:36:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.461 20:36:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:58.461 20:36:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.461 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:58.461 20:36:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.461 20:36:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.461 20:36:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:58.461 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:58.720 20:36:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:58.720 20:36:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.720 20:36:23 -- rpc/rpc.sh@26 -- # jq length 00:04:58.720 20:36:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.720 00:04:58.720 real 0m0.280s 00:04:58.720 user 0m0.173s 00:04:58.720 sys 0m0.041s 00:04:58.720 20:36:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.720 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:58.720 ************************************ 00:04:58.720 END TEST rpc_daemon_integrity 00:04:58.720 ************************************ 00:04:58.720 20:36:23 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:58.720 20:36:23 -- rpc/rpc.sh@84 -- # killprocess 2567545 00:04:58.720 20:36:23 -- common/autotest_common.sh@936 -- # '[' -z 2567545 ']' 00:04:58.720 20:36:23 -- common/autotest_common.sh@940 -- # kill -0 2567545 00:04:58.720 20:36:23 -- common/autotest_common.sh@941 -- # uname 00:04:58.720 20:36:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.720 20:36:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2567545 00:04:58.720 20:36:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.720 20:36:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.720 20:36:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2567545' 00:04:58.720 killing process with pid 2567545 00:04:58.720 20:36:23 -- common/autotest_common.sh@955 -- # kill 2567545 00:04:58.720 20:36:23 -- common/autotest_common.sh@960 -- # wait 2567545 00:04:58.980 00:04:58.980 real 0m3.031s 00:04:58.980 user 0m3.996s 
00:04:58.980 sys 0m0.953s 00:04:58.980 20:36:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:58.980 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:58.980 ************************************ 00:04:58.980 END TEST rpc 00:04:58.980 ************************************ 00:04:58.980 20:36:23 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:58.980 20:36:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:58.980 20:36:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:58.980 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.240 ************************************ 00:04:59.240 START TEST skip_rpc 00:04:59.240 ************************************ 00:04:59.240 20:36:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:59.240 * Looking for test storage... 00:04:59.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:59.240 20:36:23 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.240 20:36:23 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:59.240 20:36:23 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:59.240 20:36:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:59.240 20:36:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:59.240 20:36:23 -- common/autotest_common.sh@10 -- # set +x 00:04:59.501 ************************************ 00:04:59.501 START TEST skip_rpc 00:04:59.501 ************************************ 00:04:59.501 20:36:23 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:59.501 20:36:23 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2568425 00:04:59.501 20:36:23 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.501 20:36:23 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:59.501 20:36:23 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:59.501 [2024-04-24 20:36:23.976949] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
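This first skip_rpc case starts the target with --no-rpc-server, so no listener is created on /var/tmp/spdk.sock; the point of the test, visible in the NOT rpc_cmd spdk_get_version lines below, is that any RPC attempt must fail while the target itself keeps running. Reproduced by hand it is roughly:

  build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC answered although --no-rpc-server was given" >&2
      exit 1
  fi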
00:04:59.501 [2024-04-24 20:36:23.977006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568425 ] 00:04:59.501 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.501 [2024-04-24 20:36:24.058185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.762 [2024-04-24 20:36:24.151676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.052 20:36:28 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:05.052 20:36:28 -- common/autotest_common.sh@638 -- # local es=0 00:05:05.052 20:36:28 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:05.052 20:36:28 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:05:05.052 20:36:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.053 20:36:28 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:05:05.053 20:36:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:05.053 20:36:28 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:05:05.053 20:36:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:05.053 20:36:28 -- common/autotest_common.sh@10 -- # set +x 00:05:05.053 20:36:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:05.053 20:36:28 -- common/autotest_common.sh@641 -- # es=1 00:05:05.053 20:36:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:05.053 20:36:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:05.053 20:36:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:05.053 20:36:28 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:05.053 20:36:28 -- rpc/skip_rpc.sh@23 -- # killprocess 2568425 00:05:05.053 20:36:28 -- common/autotest_common.sh@936 -- # '[' -z 2568425 ']' 00:05:05.053 20:36:28 -- common/autotest_common.sh@940 -- # kill -0 2568425 00:05:05.053 20:36:28 -- common/autotest_common.sh@941 -- # uname 00:05:05.053 20:36:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.053 20:36:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2568425 00:05:05.053 20:36:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.053 20:36:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.053 20:36:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2568425' 00:05:05.053 killing process with pid 2568425 00:05:05.053 20:36:28 -- common/autotest_common.sh@955 -- # kill 2568425 00:05:05.053 20:36:28 -- common/autotest_common.sh@960 -- # wait 2568425 00:05:05.053 00:05:05.053 real 0m5.275s 00:05:05.053 user 0m5.022s 00:05:05.053 sys 0m0.280s 00:05:05.053 20:36:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:05.053 20:36:29 -- common/autotest_common.sh@10 -- # set +x 00:05:05.053 ************************************ 00:05:05.053 END TEST skip_rpc 00:05:05.053 ************************************ 00:05:05.053 20:36:29 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.053 20:36:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.053 20:36:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.053 20:36:29 -- common/autotest_common.sh@10 -- # set +x 00:05:05.053 ************************************ 00:05:05.053 START TEST skip_rpc_with_json 00:05:05.053 ************************************ 
00:05:05.053 20:36:29 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:05:05.053 20:36:29 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.053 20:36:29 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2569560 00:05:05.053 20:36:29 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.053 20:36:29 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2569560 00:05:05.053 20:36:29 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.053 20:36:29 -- common/autotest_common.sh@817 -- # '[' -z 2569560 ']' 00:05:05.053 20:36:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.053 20:36:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:05.053 20:36:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.053 20:36:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:05.053 20:36:29 -- common/autotest_common.sh@10 -- # set +x 00:05:05.053 [2024-04-24 20:36:29.449638] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:05.053 [2024-04-24 20:36:29.449690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569560 ] 00:05:05.053 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.053 [2024-04-24 20:36:29.527616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.053 [2024-04-24 20:36:29.596961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.623 20:36:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:05.623 20:36:30 -- common/autotest_common.sh@850 -- # return 0 00:05:05.623 20:36:30 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:05.623 20:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:05.623 20:36:30 -- common/autotest_common.sh@10 -- # set +x 00:05:05.884 [2024-04-24 20:36:30.266077] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:05.884 request: 00:05:05.884 { 00:05:05.884 "trtype": "tcp", 00:05:05.884 "method": "nvmf_get_transports", 00:05:05.884 "req_id": 1 00:05:05.884 } 00:05:05.884 Got JSON-RPC error response 00:05:05.884 response: 00:05:05.884 { 00:05:05.884 "code": -19, 00:05:05.884 "message": "No such device" 00:05:05.884 } 00:05:05.884 20:36:30 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:05:05.884 20:36:30 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:05.884 20:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:05.884 20:36:30 -- common/autotest_common.sh@10 -- # set +x 00:05:05.884 [2024-04-24 20:36:30.278188] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:05.884 20:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:05.884 20:36:30 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:05.884 20:36:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:05.884 20:36:30 -- common/autotest_common.sh@10 -- # set +x 00:05:05.884 20:36:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:05.884 20:36:30 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:05.884 { 
00:05:05.884 "subsystems": [ 00:05:05.884 { 00:05:05.884 "subsystem": "vfio_user_target", 00:05:05.884 "config": null 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "subsystem": "keyring", 00:05:05.884 "config": [] 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "subsystem": "iobuf", 00:05:05.884 "config": [ 00:05:05.884 { 00:05:05.884 "method": "iobuf_set_options", 00:05:05.884 "params": { 00:05:05.884 "small_pool_count": 8192, 00:05:05.884 "large_pool_count": 1024, 00:05:05.884 "small_bufsize": 8192, 00:05:05.884 "large_bufsize": 135168 00:05:05.884 } 00:05:05.884 } 00:05:05.884 ] 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "subsystem": "sock", 00:05:05.884 "config": [ 00:05:05.884 { 00:05:05.884 "method": "sock_impl_set_options", 00:05:05.884 "params": { 00:05:05.884 "impl_name": "posix", 00:05:05.884 "recv_buf_size": 2097152, 00:05:05.884 "send_buf_size": 2097152, 00:05:05.884 "enable_recv_pipe": true, 00:05:05.884 "enable_quickack": false, 00:05:05.884 "enable_placement_id": 0, 00:05:05.884 "enable_zerocopy_send_server": true, 00:05:05.884 "enable_zerocopy_send_client": false, 00:05:05.884 "zerocopy_threshold": 0, 00:05:05.884 "tls_version": 0, 00:05:05.884 "enable_ktls": false 00:05:05.884 } 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "method": "sock_impl_set_options", 00:05:05.884 "params": { 00:05:05.884 "impl_name": "ssl", 00:05:05.884 "recv_buf_size": 4096, 00:05:05.884 "send_buf_size": 4096, 00:05:05.884 "enable_recv_pipe": true, 00:05:05.884 "enable_quickack": false, 00:05:05.884 "enable_placement_id": 0, 00:05:05.884 "enable_zerocopy_send_server": true, 00:05:05.884 "enable_zerocopy_send_client": false, 00:05:05.884 "zerocopy_threshold": 0, 00:05:05.884 "tls_version": 0, 00:05:05.884 "enable_ktls": false 00:05:05.884 } 00:05:05.884 } 00:05:05.884 ] 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "subsystem": "vmd", 00:05:05.884 "config": [] 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "subsystem": "accel", 00:05:05.884 "config": [ 00:05:05.884 { 00:05:05.884 "method": "accel_set_options", 00:05:05.884 "params": { 00:05:05.884 "small_cache_size": 128, 00:05:05.884 "large_cache_size": 16, 00:05:05.884 "task_count": 2048, 00:05:05.884 "sequence_count": 2048, 00:05:05.884 "buf_count": 2048 00:05:05.884 } 00:05:05.884 } 00:05:05.884 ] 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "subsystem": "bdev", 00:05:05.884 "config": [ 00:05:05.884 { 00:05:05.884 "method": "bdev_set_options", 00:05:05.884 "params": { 00:05:05.884 "bdev_io_pool_size": 65535, 00:05:05.884 "bdev_io_cache_size": 256, 00:05:05.884 "bdev_auto_examine": true, 00:05:05.884 "iobuf_small_cache_size": 128, 00:05:05.884 "iobuf_large_cache_size": 16 00:05:05.884 } 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "method": "bdev_raid_set_options", 00:05:05.884 "params": { 00:05:05.884 "process_window_size_kb": 1024 00:05:05.884 } 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "method": "bdev_iscsi_set_options", 00:05:05.884 "params": { 00:05:05.884 "timeout_sec": 30 00:05:05.884 } 00:05:05.884 }, 00:05:05.884 { 00:05:05.884 "method": "bdev_nvme_set_options", 00:05:05.884 "params": { 00:05:05.884 "action_on_timeout": "none", 00:05:05.884 "timeout_us": 0, 00:05:05.884 "timeout_admin_us": 0, 00:05:05.884 "keep_alive_timeout_ms": 10000, 00:05:05.884 "arbitration_burst": 0, 00:05:05.884 "low_priority_weight": 0, 00:05:05.884 "medium_priority_weight": 0, 00:05:05.884 "high_priority_weight": 0, 00:05:05.884 "nvme_adminq_poll_period_us": 10000, 00:05:05.884 "nvme_ioq_poll_period_us": 0, 00:05:05.884 "io_queue_requests": 0, 00:05:05.884 
"delay_cmd_submit": true, 00:05:05.884 "transport_retry_count": 4, 00:05:05.884 "bdev_retry_count": 3, 00:05:05.884 "transport_ack_timeout": 0, 00:05:05.884 "ctrlr_loss_timeout_sec": 0, 00:05:05.884 "reconnect_delay_sec": 0, 00:05:05.884 "fast_io_fail_timeout_sec": 0, 00:05:05.884 "disable_auto_failback": false, 00:05:05.884 "generate_uuids": false, 00:05:05.884 "transport_tos": 0, 00:05:05.884 "nvme_error_stat": false, 00:05:05.884 "rdma_srq_size": 0, 00:05:05.884 "io_path_stat": false, 00:05:05.884 "allow_accel_sequence": false, 00:05:05.884 "rdma_max_cq_size": 0, 00:05:05.884 "rdma_cm_event_timeout_ms": 0, 00:05:05.884 "dhchap_digests": [ 00:05:05.885 "sha256", 00:05:05.885 "sha384", 00:05:05.885 "sha512" 00:05:05.885 ], 00:05:05.885 "dhchap_dhgroups": [ 00:05:05.885 "null", 00:05:05.885 "ffdhe2048", 00:05:05.885 "ffdhe3072", 00:05:05.885 "ffdhe4096", 00:05:05.885 "ffdhe6144", 00:05:05.885 "ffdhe8192" 00:05:05.885 ] 00:05:05.885 } 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "method": "bdev_nvme_set_hotplug", 00:05:05.885 "params": { 00:05:05.885 "period_us": 100000, 00:05:05.885 "enable": false 00:05:05.885 } 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "method": "bdev_wait_for_examine" 00:05:05.885 } 00:05:05.885 ] 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "scsi", 00:05:05.885 "config": null 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "scheduler", 00:05:05.885 "config": [ 00:05:05.885 { 00:05:05.885 "method": "framework_set_scheduler", 00:05:05.885 "params": { 00:05:05.885 "name": "static" 00:05:05.885 } 00:05:05.885 } 00:05:05.885 ] 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "vhost_scsi", 00:05:05.885 "config": [] 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "vhost_blk", 00:05:05.885 "config": [] 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "ublk", 00:05:05.885 "config": [] 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "nbd", 00:05:05.885 "config": [] 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "nvmf", 00:05:05.885 "config": [ 00:05:05.885 { 00:05:05.885 "method": "nvmf_set_config", 00:05:05.885 "params": { 00:05:05.885 "discovery_filter": "match_any", 00:05:05.885 "admin_cmd_passthru": { 00:05:05.885 "identify_ctrlr": false 00:05:05.885 } 00:05:05.885 } 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "method": "nvmf_set_max_subsystems", 00:05:05.885 "params": { 00:05:05.885 "max_subsystems": 1024 00:05:05.885 } 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "method": "nvmf_set_crdt", 00:05:05.885 "params": { 00:05:05.885 "crdt1": 0, 00:05:05.885 "crdt2": 0, 00:05:05.885 "crdt3": 0 00:05:05.885 } 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "method": "nvmf_create_transport", 00:05:05.885 "params": { 00:05:05.885 "trtype": "TCP", 00:05:05.885 "max_queue_depth": 128, 00:05:05.885 "max_io_qpairs_per_ctrlr": 127, 00:05:05.885 "in_capsule_data_size": 4096, 00:05:05.885 "max_io_size": 131072, 00:05:05.885 "io_unit_size": 131072, 00:05:05.885 "max_aq_depth": 128, 00:05:05.885 "num_shared_buffers": 511, 00:05:05.885 "buf_cache_size": 4294967295, 00:05:05.885 "dif_insert_or_strip": false, 00:05:05.885 "zcopy": false, 00:05:05.885 "c2h_success": true, 00:05:05.885 "sock_priority": 0, 00:05:05.885 "abort_timeout_sec": 1, 00:05:05.885 "ack_timeout": 0, 00:05:05.885 "data_wr_pool_size": 0 00:05:05.885 } 00:05:05.885 } 00:05:05.885 ] 00:05:05.885 }, 00:05:05.885 { 00:05:05.885 "subsystem": "iscsi", 00:05:05.885 "config": [ 00:05:05.885 { 00:05:05.885 "method": "iscsi_set_options", 00:05:05.885 "params": { 00:05:05.885 
"node_base": "iqn.2016-06.io.spdk", 00:05:05.885 "max_sessions": 128, 00:05:05.885 "max_connections_per_session": 2, 00:05:05.885 "max_queue_depth": 64, 00:05:05.885 "default_time2wait": 2, 00:05:05.885 "default_time2retain": 20, 00:05:05.885 "first_burst_length": 8192, 00:05:05.885 "immediate_data": true, 00:05:05.885 "allow_duplicated_isid": false, 00:05:05.885 "error_recovery_level": 0, 00:05:05.885 "nop_timeout": 60, 00:05:05.885 "nop_in_interval": 30, 00:05:05.885 "disable_chap": false, 00:05:05.885 "require_chap": false, 00:05:05.885 "mutual_chap": false, 00:05:05.885 "chap_group": 0, 00:05:05.885 "max_large_datain_per_connection": 64, 00:05:05.885 "max_r2t_per_connection": 4, 00:05:05.885 "pdu_pool_size": 36864, 00:05:05.885 "immediate_data_pool_size": 16384, 00:05:05.885 "data_out_pool_size": 2048 00:05:05.885 } 00:05:05.885 } 00:05:05.885 ] 00:05:05.885 } 00:05:05.885 ] 00:05:05.885 } 00:05:05.885 20:36:30 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:05.885 20:36:30 -- rpc/skip_rpc.sh@40 -- # killprocess 2569560 00:05:05.885 20:36:30 -- common/autotest_common.sh@936 -- # '[' -z 2569560 ']' 00:05:05.885 20:36:30 -- common/autotest_common.sh@940 -- # kill -0 2569560 00:05:05.885 20:36:30 -- common/autotest_common.sh@941 -- # uname 00:05:05.885 20:36:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.885 20:36:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2569560 00:05:05.885 20:36:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.885 20:36:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.885 20:36:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2569560' 00:05:05.885 killing process with pid 2569560 00:05:05.885 20:36:30 -- common/autotest_common.sh@955 -- # kill 2569560 00:05:05.885 20:36:30 -- common/autotest_common.sh@960 -- # wait 2569560 00:05:06.146 20:36:30 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2569820 00:05:06.146 20:36:30 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:06.146 20:36:30 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.436 20:36:35 -- rpc/skip_rpc.sh@50 -- # killprocess 2569820 00:05:11.436 20:36:35 -- common/autotest_common.sh@936 -- # '[' -z 2569820 ']' 00:05:11.436 20:36:35 -- common/autotest_common.sh@940 -- # kill -0 2569820 00:05:11.436 20:36:35 -- common/autotest_common.sh@941 -- # uname 00:05:11.436 20:36:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:11.436 20:36:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2569820 00:05:11.436 20:36:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:11.436 20:36:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:11.436 20:36:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2569820' 00:05:11.436 killing process with pid 2569820 00:05:11.436 20:36:35 -- common/autotest_common.sh@955 -- # kill 2569820 00:05:11.436 20:36:35 -- common/autotest_common.sh@960 -- # wait 2569820 00:05:11.436 20:36:35 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:11.436 20:36:35 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:11.436 00:05:11.436 real 0m6.600s 00:05:11.436 user 0m6.504s 00:05:11.436 sys 0m0.562s 00:05:11.436 
20:36:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.436 20:36:35 -- common/autotest_common.sh@10 -- # set +x 00:05:11.436 ************************************ 00:05:11.436 END TEST skip_rpc_with_json 00:05:11.436 ************************************ 00:05:11.436 20:36:36 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:11.436 20:36:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.436 20:36:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.436 20:36:36 -- common/autotest_common.sh@10 -- # set +x 00:05:11.697 ************************************ 00:05:11.697 START TEST skip_rpc_with_delay 00:05:11.697 ************************************ 00:05:11.697 20:36:36 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:05:11.697 20:36:36 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.697 20:36:36 -- common/autotest_common.sh@638 -- # local es=0 00:05:11.697 20:36:36 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.697 20:36:36 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.697 20:36:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.697 20:36:36 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.697 20:36:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.697 20:36:36 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.697 20:36:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:11.697 20:36:36 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.697 20:36:36 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:11.697 20:36:36 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:11.697 [2024-04-24 20:36:36.224769] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
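(For reference: the two spdk_tgt invocations the skip_rpc tests above exercise can be reproduced by hand. A minimal sketch, assuming $SPDK_DIR points at the SPDK checkout and config.json is the file produced by the earlier save_config step; paths are shortened for illustration.)
  # skip_rpc_with_json: replay a saved JSON config with the RPC server disabled
  $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json $SPDK_DIR/test/rpc/config.json
  # skip_rpc_with_delay: --wait-for-rpc requires a running RPC server, so pairing it
  # with --no-rpc-server fails, which is exactly the *ERROR* reported above
  $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc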
00:05:11.698 [2024-04-24 20:36:36.224864] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:11.698 20:36:36 -- common/autotest_common.sh@641 -- # es=1 00:05:11.698 20:36:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:11.698 20:36:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:11.698 20:36:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:11.698 00:05:11.698 real 0m0.076s 00:05:11.698 user 0m0.048s 00:05:11.698 sys 0m0.027s 00:05:11.698 20:36:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.698 20:36:36 -- common/autotest_common.sh@10 -- # set +x 00:05:11.698 ************************************ 00:05:11.698 END TEST skip_rpc_with_delay 00:05:11.698 ************************************ 00:05:11.698 20:36:36 -- rpc/skip_rpc.sh@77 -- # uname 00:05:11.698 20:36:36 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:11.698 20:36:36 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:11.698 20:36:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.698 20:36:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.698 20:36:36 -- common/autotest_common.sh@10 -- # set +x 00:05:11.958 ************************************ 00:05:11.958 START TEST exit_on_failed_rpc_init 00:05:11.958 ************************************ 00:05:11.958 20:36:36 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:05:11.958 20:36:36 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2571122 00:05:11.958 20:36:36 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2571122 00:05:11.958 20:36:36 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.958 20:36:36 -- common/autotest_common.sh@817 -- # '[' -z 2571122 ']' 00:05:11.958 20:36:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.958 20:36:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:11.958 20:36:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.958 20:36:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:11.958 20:36:36 -- common/autotest_common.sh@10 -- # set +x 00:05:11.958 [2024-04-24 20:36:36.497955] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:05:11.958 [2024-04-24 20:36:36.498004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571122 ] 00:05:11.958 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.958 [2024-04-24 20:36:36.574314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.218 [2024-04-24 20:36:36.641599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.788 20:36:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.788 20:36:37 -- common/autotest_common.sh@850 -- # return 0 00:05:12.788 20:36:37 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.788 20:36:37 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.788 20:36:37 -- common/autotest_common.sh@638 -- # local es=0 00:05:12.788 20:36:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.788 20:36:37 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.788 20:36:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.788 20:36:37 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.788 20:36:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.788 20:36:37 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.788 20:36:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:12.788 20:36:37 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:12.788 20:36:37 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:12.788 20:36:37 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:12.788 [2024-04-24 20:36:37.408893] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:12.788 [2024-04-24 20:36:37.408945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571227 ] 00:05:13.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.048 [2024-04-24 20:36:37.467203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.048 [2024-04-24 20:36:37.529673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.048 [2024-04-24 20:36:37.529739] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
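(For reference: exit_on_failed_rpc_init starts a second spdk_tgt while the first still owns the default RPC socket, which is why rpc.c reports /var/tmp/spdk.sock as in use. A hedged sketch of the collision and the usual way around it, assuming $SPDK_DIR is the SPDK checkout; the second socket path is illustrative.)
  # first instance listens on the default /var/tmp/spdk.sock
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 &
  # a second instance on the same socket fails as logged above ...
  $SPDK_DIR/build/bin/spdk_tgt -m 0x2
  # ... whereas giving it its own socket with -r lets both run side by side
  $SPDK_DIR/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock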
00:05:13.048 [2024-04-24 20:36:37.529749] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:13.048 [2024-04-24 20:36:37.529756] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:13.048 20:36:37 -- common/autotest_common.sh@641 -- # es=234 00:05:13.048 20:36:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:13.048 20:36:37 -- common/autotest_common.sh@650 -- # es=106 00:05:13.048 20:36:37 -- common/autotest_common.sh@651 -- # case "$es" in 00:05:13.048 20:36:37 -- common/autotest_common.sh@658 -- # es=1 00:05:13.048 20:36:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:13.048 20:36:37 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:13.048 20:36:37 -- rpc/skip_rpc.sh@70 -- # killprocess 2571122 00:05:13.048 20:36:37 -- common/autotest_common.sh@936 -- # '[' -z 2571122 ']' 00:05:13.048 20:36:37 -- common/autotest_common.sh@940 -- # kill -0 2571122 00:05:13.048 20:36:37 -- common/autotest_common.sh@941 -- # uname 00:05:13.048 20:36:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:13.048 20:36:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2571122 00:05:13.048 20:36:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:13.048 20:36:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:13.048 20:36:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2571122' 00:05:13.048 killing process with pid 2571122 00:05:13.048 20:36:37 -- common/autotest_common.sh@955 -- # kill 2571122 00:05:13.048 20:36:37 -- common/autotest_common.sh@960 -- # wait 2571122 00:05:13.309 00:05:13.309 real 0m1.410s 00:05:13.309 user 0m1.711s 00:05:13.309 sys 0m0.372s 00:05:13.309 20:36:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.309 20:36:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.309 ************************************ 00:05:13.309 END TEST exit_on_failed_rpc_init 00:05:13.309 ************************************ 00:05:13.309 20:36:37 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:13.309 00:05:13.309 real 0m14.235s 00:05:13.309 user 0m13.615s 00:05:13.309 sys 0m1.735s 00:05:13.309 20:36:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.309 20:36:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.309 ************************************ 00:05:13.309 END TEST skip_rpc 00:05:13.309 ************************************ 00:05:13.309 20:36:37 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.309 20:36:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.309 20:36:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.309 20:36:37 -- common/autotest_common.sh@10 -- # set +x 00:05:13.570 ************************************ 00:05:13.570 START TEST rpc_client 00:05:13.570 ************************************ 00:05:13.570 20:36:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:13.570 * Looking for test storage... 
00:05:13.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:13.570 20:36:38 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:13.570 OK 00:05:13.830 20:36:38 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:13.830 00:05:13.830 real 0m0.137s 00:05:13.830 user 0m0.054s 00:05:13.830 sys 0m0.091s 00:05:13.830 20:36:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.830 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 ************************************ 00:05:13.830 END TEST rpc_client 00:05:13.830 ************************************ 00:05:13.830 20:36:38 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:13.830 20:36:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.830 20:36:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.830 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 ************************************ 00:05:13.830 START TEST json_config 00:05:13.830 ************************************ 00:05:13.830 20:36:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:14.091 20:36:38 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:14.091 20:36:38 -- nvmf/common.sh@7 -- # uname -s 00:05:14.091 20:36:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:14.091 20:36:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:14.091 20:36:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:14.091 20:36:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:14.091 20:36:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:14.091 20:36:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:14.091 20:36:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:14.091 20:36:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:14.091 20:36:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:14.091 20:36:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:14.091 20:36:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:14.091 20:36:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:14.091 20:36:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:14.091 20:36:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:14.091 20:36:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:14.091 20:36:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:14.091 20:36:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:14.091 20:36:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:14.091 20:36:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:14.092 20:36:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:14.092 20:36:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.092 20:36:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.092 20:36:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.092 20:36:38 -- paths/export.sh@5 -- # export PATH 00:05:14.092 20:36:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:14.092 20:36:38 -- nvmf/common.sh@47 -- # : 0 00:05:14.092 20:36:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:14.092 20:36:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:14.092 20:36:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:14.092 20:36:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:14.092 20:36:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:14.092 20:36:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:14.092 20:36:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:14.092 20:36:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:14.092 20:36:38 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:14.092 20:36:38 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:14.092 20:36:38 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:14.092 20:36:38 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:14.092 20:36:38 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:14.092 20:36:38 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:14.092 20:36:38 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:14.092 20:36:38 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:14.092 20:36:38 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:14.092 20:36:38 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:14.092 20:36:38 -- 
json_config/json_config.sh@33 -- # declare -A app_params 00:05:14.092 20:36:38 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:14.092 20:36:38 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:14.092 20:36:38 -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:14.092 20:36:38 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:14.092 20:36:38 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:14.092 INFO: JSON configuration test init 00:05:14.092 20:36:38 -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:14.092 20:36:38 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:14.092 20:36:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:14.092 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:05:14.092 20:36:38 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:14.092 20:36:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:14.092 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:05:14.092 20:36:38 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:14.092 20:36:38 -- json_config/common.sh@9 -- # local app=target 00:05:14.092 20:36:38 -- json_config/common.sh@10 -- # shift 00:05:14.092 20:36:38 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:14.092 20:36:38 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:14.092 20:36:38 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:14.092 20:36:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.092 20:36:38 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:14.092 20:36:38 -- json_config/common.sh@22 -- # app_pid["$app"]=2571682 00:05:14.092 20:36:38 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:14.092 20:36:38 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:14.092 Waiting for target to run... 00:05:14.092 20:36:38 -- json_config/common.sh@25 -- # waitforlisten 2571682 /var/tmp/spdk_tgt.sock 00:05:14.092 20:36:38 -- common/autotest_common.sh@817 -- # '[' -z 2571682 ']' 00:05:14.092 20:36:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:14.092 20:36:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:14.092 20:36:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:14.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:14.092 20:36:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:14.092 20:36:38 -- common/autotest_common.sh@10 -- # set +x 00:05:14.092 [2024-04-24 20:36:38.552079] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:05:14.092 [2024-04-24 20:36:38.552127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571682 ] 00:05:14.092 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.354 [2024-04-24 20:36:38.778992] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.354 [2024-04-24 20:36:38.826668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.926 20:36:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:14.926 20:36:39 -- common/autotest_common.sh@850 -- # return 0 00:05:14.926 20:36:39 -- json_config/common.sh@26 -- # echo '' 00:05:14.926 00:05:14.926 20:36:39 -- json_config/json_config.sh@269 -- # create_accel_config 00:05:14.926 20:36:39 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:14.926 20:36:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:14.926 20:36:39 -- common/autotest_common.sh@10 -- # set +x 00:05:14.926 20:36:39 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:14.926 20:36:39 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:14.926 20:36:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:14.926 20:36:39 -- common/autotest_common.sh@10 -- # set +x 00:05:14.926 20:36:39 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:14.926 20:36:39 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:14.926 20:36:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:15.497 20:36:40 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:15.497 20:36:40 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:15.497 20:36:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:15.497 20:36:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.497 20:36:40 -- json_config/json_config.sh@45 -- # local ret=0 00:05:15.497 20:36:40 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:15.497 20:36:40 -- json_config/json_config.sh@46 -- # local enabled_types 00:05:15.497 20:36:40 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:15.497 20:36:40 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:15.497 20:36:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:15.757 20:36:40 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:15.757 20:36:40 -- json_config/json_config.sh@48 -- # local get_types 00:05:15.757 20:36:40 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:15.757 20:36:40 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:15.757 20:36:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:15.757 20:36:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.757 20:36:40 -- json_config/json_config.sh@55 -- # return 0 00:05:15.757 20:36:40 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:15.757 20:36:40 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:15.757 20:36:40 -- 
json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:15.757 20:36:40 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:15.757 20:36:40 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:15.757 20:36:40 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:15.757 20:36:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:15.757 20:36:40 -- common/autotest_common.sh@10 -- # set +x 00:05:15.757 20:36:40 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:15.757 20:36:40 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:15.757 20:36:40 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:15.757 20:36:40 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.757 20:36:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:16.018 MallocForNvmf0 00:05:16.018 20:36:40 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.018 20:36:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:16.280 MallocForNvmf1 00:05:16.280 20:36:40 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:16.280 20:36:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:16.280 [2024-04-24 20:36:40.884336] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.280 20:36:40 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:16.280 20:36:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:16.541 20:36:41 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:16.541 20:36:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:16.802 20:36:41 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.802 20:36:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:17.063 20:36:41 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:17.063 20:36:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:17.063 [2024-04-24 20:36:41.646767] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:17.063 20:36:41 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:17.063 20:36:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:17.063 
20:36:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.063 20:36:41 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:17.063 20:36:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:17.063 20:36:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.324 20:36:41 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:17.324 20:36:41 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.324 20:36:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:17.324 MallocBdevForConfigChangeCheck 00:05:17.324 20:36:41 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:17.324 20:36:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:17.324 20:36:41 -- common/autotest_common.sh@10 -- # set +x 00:05:17.585 20:36:41 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:17.585 20:36:41 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:17.846 20:36:42 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:17.846 INFO: shutting down applications... 00:05:17.846 20:36:42 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:17.846 20:36:42 -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:17.846 20:36:42 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:17.846 20:36:42 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:18.107 Calling clear_iscsi_subsystem 00:05:18.107 Calling clear_nvmf_subsystem 00:05:18.107 Calling clear_nbd_subsystem 00:05:18.107 Calling clear_ublk_subsystem 00:05:18.107 Calling clear_vhost_blk_subsystem 00:05:18.107 Calling clear_vhost_scsi_subsystem 00:05:18.107 Calling clear_bdev_subsystem 00:05:18.107 20:36:42 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:18.107 20:36:42 -- json_config/json_config.sh@343 -- # count=100 00:05:18.107 20:36:42 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:18.107 20:36:42 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.107 20:36:42 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:18.107 20:36:42 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:18.724 20:36:43 -- json_config/json_config.sh@345 -- # break 00:05:18.724 20:36:43 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:18.724 20:36:43 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:18.724 20:36:43 -- json_config/common.sh@31 -- # local app=target 00:05:18.724 20:36:43 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.724 20:36:43 -- json_config/common.sh@35 -- # [[ -n 2571682 ]] 00:05:18.724 20:36:43 -- json_config/common.sh@38 -- # kill -SIGINT 2571682 00:05:18.724 20:36:43 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.724 20:36:43 -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.724 20:36:43 -- json_config/common.sh@41 -- # kill -0 2571682 00:05:18.724 20:36:43 -- json_config/common.sh@45 -- # sleep 0.5 00:05:18.984 20:36:43 -- json_config/common.sh@40 -- # (( i++ )) 00:05:18.984 20:36:43 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.984 20:36:43 -- json_config/common.sh@41 -- # kill -0 2571682 00:05:18.984 20:36:43 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:18.984 20:36:43 -- json_config/common.sh@43 -- # break 00:05:18.984 20:36:43 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:18.984 20:36:43 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:18.984 SPDK target shutdown done 00:05:18.984 20:36:43 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:18.984 INFO: relaunching applications... 00:05:18.984 20:36:43 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.984 20:36:43 -- json_config/common.sh@9 -- # local app=target 00:05:18.984 20:36:43 -- json_config/common.sh@10 -- # shift 00:05:18.984 20:36:43 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.984 20:36:43 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.984 20:36:43 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.984 20:36:43 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.984 20:36:43 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.984 20:36:43 -- json_config/common.sh@22 -- # app_pid["$app"]=2572812 00:05:18.984 20:36:43 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.984 Waiting for target to run... 00:05:18.984 20:36:43 -- json_config/common.sh@25 -- # waitforlisten 2572812 /var/tmp/spdk_tgt.sock 00:05:18.984 20:36:43 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:18.984 20:36:43 -- common/autotest_common.sh@817 -- # '[' -z 2572812 ']' 00:05:18.984 20:36:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.984 20:36:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:18.984 20:36:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.984 20:36:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:18.984 20:36:43 -- common/autotest_common.sh@10 -- # set +x 00:05:19.247 [2024-04-24 20:36:43.650351] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:05:19.247 [2024-04-24 20:36:43.650408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572812 ] 00:05:19.247 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.508 [2024-04-24 20:36:44.035838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.508 [2024-04-24 20:36:44.097326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.079 [2024-04-24 20:36:44.586316] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.079 [2024-04-24 20:36:44.618689] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.079 20:36:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:20.079 20:36:44 -- common/autotest_common.sh@850 -- # return 0 00:05:20.079 20:36:44 -- json_config/common.sh@26 -- # echo '' 00:05:20.079 00:05:20.079 20:36:44 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:20.079 20:36:44 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:20.079 INFO: Checking if target configuration is the same... 00:05:20.079 20:36:44 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.079 20:36:44 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:20.079 20:36:44 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.079 + '[' 2 -ne 2 ']' 00:05:20.079 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.079 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:20.079 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.079 +++ basename /dev/fd/62 00:05:20.079 ++ mktemp /tmp/62.XXX 00:05:20.079 + tmp_file_1=/tmp/62.4wa 00:05:20.079 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.079 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.079 + tmp_file_2=/tmp/spdk_tgt_config.json.HKt 00:05:20.079 + ret=0 00:05:20.079 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.339 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.600 + diff -u /tmp/62.4wa /tmp/spdk_tgt_config.json.HKt 00:05:20.600 + echo 'INFO: JSON config files are the same' 00:05:20.600 INFO: JSON config files are the same 00:05:20.600 + rm /tmp/62.4wa /tmp/spdk_tgt_config.json.HKt 00:05:20.600 + exit 0 00:05:20.600 20:36:45 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:20.600 20:36:45 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:20.600 INFO: changing configuration and checking if this can be detected... 
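(For reference: the configuration being saved and compared in this json_config test was built over the RPC socket with the calls traced earlier in the run; tgt_rpc is a thin wrapper around rpc.py. A sketch of that sequence, with the long workspace prefix abbreviated to $SPDK_DIR.)
  # two malloc bdevs exported through an NVMe/TCP subsystem listening on 127.0.0.1:4420
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420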
00:05:20.600 20:36:45 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.600 20:36:45 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.600 20:36:45 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.600 20:36:45 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:20.600 20:36:45 -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.600 + '[' 2 -ne 2 ']' 00:05:20.600 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.600 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:20.600 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.600 +++ basename /dev/fd/62 00:05:20.600 ++ mktemp /tmp/62.XXX 00:05:20.860 + tmp_file_1=/tmp/62.qxT 00:05:20.860 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.860 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.860 + tmp_file_2=/tmp/spdk_tgt_config.json.hyz 00:05:20.860 + ret=0 00:05:20.860 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.120 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.120 + diff -u /tmp/62.qxT /tmp/spdk_tgt_config.json.hyz 00:05:21.120 + ret=1 00:05:21.120 + echo '=== Start of file: /tmp/62.qxT ===' 00:05:21.120 + cat /tmp/62.qxT 00:05:21.120 + echo '=== End of file: /tmp/62.qxT ===' 00:05:21.120 + echo '' 00:05:21.120 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hyz ===' 00:05:21.120 + cat /tmp/spdk_tgt_config.json.hyz 00:05:21.120 + echo '=== End of file: /tmp/spdk_tgt_config.json.hyz ===' 00:05:21.120 + echo '' 00:05:21.121 + rm /tmp/62.qxT /tmp/spdk_tgt_config.json.hyz 00:05:21.121 + exit 1 00:05:21.121 20:36:45 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:21.121 INFO: configuration change detected. 
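(For reference: the "configuration change detected" result comes from comparing a normalized dump of the live configuration against the normalized JSON the target was launched with; deleting MallocBdevForConfigChangeCheck makes them differ. A hedged sketch, assuming config_filter.py reads the config on stdin as the traces above suggest; temp file names are illustrative.)
  # normalize the live config and the launch-time config, then diff them
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | \
      $SPDK_DIR/test/json_config/config_filter.py -method sort > /tmp/live.json
  $SPDK_DIR/test/json_config/config_filter.py -method sort \
      < $SPDK_DIR/spdk_tgt_config.json > /tmp/launched.json
  # after bdev_malloc_delete MallocBdevForConfigChangeCheck the dumps differ,
  # so diff exits non-zero and the test reports the change as detected
  diff -u /tmp/live.json /tmp/launched.json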
00:05:21.121 20:36:45 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:21.121 20:36:45 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:21.121 20:36:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:21.121 20:36:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.121 20:36:45 -- json_config/json_config.sh@307 -- # local ret=0 00:05:21.121 20:36:45 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:21.121 20:36:45 -- json_config/json_config.sh@317 -- # [[ -n 2572812 ]] 00:05:21.121 20:36:45 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:21.121 20:36:45 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:21.121 20:36:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:21.121 20:36:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.121 20:36:45 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:21.121 20:36:45 -- json_config/json_config.sh@193 -- # uname -s 00:05:21.121 20:36:45 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:21.121 20:36:45 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:21.121 20:36:45 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:21.121 20:36:45 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:21.121 20:36:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:21.121 20:36:45 -- common/autotest_common.sh@10 -- # set +x 00:05:21.121 20:36:45 -- json_config/json_config.sh@323 -- # killprocess 2572812 00:05:21.121 20:36:45 -- common/autotest_common.sh@936 -- # '[' -z 2572812 ']' 00:05:21.121 20:36:45 -- common/autotest_common.sh@940 -- # kill -0 2572812 00:05:21.121 20:36:45 -- common/autotest_common.sh@941 -- # uname 00:05:21.121 20:36:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:21.121 20:36:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2572812 00:05:21.121 20:36:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:21.121 20:36:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:21.121 20:36:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2572812' 00:05:21.121 killing process with pid 2572812 00:05:21.121 20:36:45 -- common/autotest_common.sh@955 -- # kill 2572812 00:05:21.121 20:36:45 -- common/autotest_common.sh@960 -- # wait 2572812 00:05:21.692 20:36:46 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.692 20:36:46 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:21.692 20:36:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:21.692 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.692 20:36:46 -- json_config/json_config.sh@328 -- # return 0 00:05:21.692 20:36:46 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:21.692 INFO: Success 00:05:21.692 00:05:21.692 real 0m7.670s 00:05:21.692 user 0m9.745s 00:05:21.692 sys 0m1.758s 00:05:21.692 20:36:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:21.692 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.692 ************************************ 00:05:21.692 END TEST json_config 00:05:21.692 ************************************ 00:05:21.692 20:36:46 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.692 20:36:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:21.692 20:36:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:21.692 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.692 ************************************ 00:05:21.692 START TEST json_config_extra_key 00:05:21.692 ************************************ 00:05:21.692 20:36:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:21.954 20:36:46 -- nvmf/common.sh@7 -- # uname -s 00:05:21.954 20:36:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:21.954 20:36:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:21.954 20:36:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:21.954 20:36:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:21.954 20:36:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:21.954 20:36:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:21.954 20:36:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:21.954 20:36:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:21.954 20:36:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:21.954 20:36:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:21.954 20:36:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:21.954 20:36:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:05:21.954 20:36:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:21.954 20:36:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:21.954 20:36:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:21.954 20:36:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:21.954 20:36:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:21.954 20:36:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:21.954 20:36:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:21.954 20:36:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:21.954 20:36:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.954 20:36:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.954 20:36:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.954 20:36:46 -- paths/export.sh@5 -- # export PATH 00:05:21.954 20:36:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:21.954 20:36:46 -- nvmf/common.sh@47 -- # : 0 00:05:21.954 20:36:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:21.954 20:36:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:21.954 20:36:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:21.954 20:36:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:21.954 20:36:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:21.954 20:36:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:21.954 20:36:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:21.954 20:36:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:21.954 INFO: launching applications... 
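(For reference: json_config_extra_key launches the target from a pre-canned JSON file rather than driving RPCs, then only verifies that it comes up and shuts down cleanly. A rough stand-in for the launch plus the wait-for-listen step, assuming $SPDK_DIR is the SPDK checkout; the polling loop is illustrative, not the actual waitforlisten helper.)
  $SPDK_DIR/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json $SPDK_DIR/test/json_config/extra_key.json &
  # poll the RPC socket until the target answers
  until $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done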
00:05:21.954 20:36:46 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.954 20:36:46 -- json_config/common.sh@9 -- # local app=target 00:05:21.954 20:36:46 -- json_config/common.sh@10 -- # shift 00:05:21.954 20:36:46 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:21.954 20:36:46 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:21.954 20:36:46 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:21.954 20:36:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.954 20:36:46 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:21.954 20:36:46 -- json_config/common.sh@22 -- # app_pid["$app"]=2573386 00:05:21.954 20:36:46 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:21.954 Waiting for target to run... 00:05:21.954 20:36:46 -- json_config/common.sh@25 -- # waitforlisten 2573386 /var/tmp/spdk_tgt.sock 00:05:21.954 20:36:46 -- common/autotest_common.sh@817 -- # '[' -z 2573386 ']' 00:05:21.954 20:36:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.954 20:36:46 -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:21.954 20:36:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:21.954 20:36:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.954 20:36:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:21.955 20:36:46 -- common/autotest_common.sh@10 -- # set +x 00:05:21.955 [2024-04-24 20:36:46.433233] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:21.955 [2024-04-24 20:36:46.433303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573386 ] 00:05:21.955 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.215 [2024-04-24 20:36:46.737140] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.215 [2024-04-24 20:36:46.786321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.784 20:36:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:22.784 20:36:47 -- common/autotest_common.sh@850 -- # return 0 00:05:22.784 20:36:47 -- json_config/common.sh@26 -- # echo '' 00:05:22.784 00:05:22.784 20:36:47 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:22.784 INFO: shutting down applications... 
00:05:22.784 20:36:47 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:22.784 20:36:47 -- json_config/common.sh@31 -- # local app=target 00:05:22.784 20:36:47 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:22.784 20:36:47 -- json_config/common.sh@35 -- # [[ -n 2573386 ]] 00:05:22.784 20:36:47 -- json_config/common.sh@38 -- # kill -SIGINT 2573386 00:05:22.784 20:36:47 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:22.784 20:36:47 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:22.784 20:36:47 -- json_config/common.sh@41 -- # kill -0 2573386 00:05:22.784 20:36:47 -- json_config/common.sh@45 -- # sleep 0.5 00:05:23.354 20:36:47 -- json_config/common.sh@40 -- # (( i++ )) 00:05:23.354 20:36:47 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.354 20:36:47 -- json_config/common.sh@41 -- # kill -0 2573386 00:05:23.354 20:36:47 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:23.354 20:36:47 -- json_config/common.sh@43 -- # break 00:05:23.354 20:36:47 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:23.354 20:36:47 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:23.354 SPDK target shutdown done 00:05:23.354 20:36:47 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:23.354 Success 00:05:23.354 00:05:23.354 real 0m1.526s 00:05:23.354 user 0m1.200s 00:05:23.354 sys 0m0.413s 00:05:23.354 20:36:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:23.354 20:36:47 -- common/autotest_common.sh@10 -- # set +x 00:05:23.354 ************************************ 00:05:23.354 END TEST json_config_extra_key 00:05:23.354 ************************************ 00:05:23.354 20:36:47 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.354 20:36:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.354 20:36:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.354 20:36:47 -- common/autotest_common.sh@10 -- # set +x 00:05:23.354 ************************************ 00:05:23.354 START TEST alias_rpc 00:05:23.354 ************************************ 00:05:23.354 20:36:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:23.614 * Looking for test storage... 00:05:23.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:23.614 20:36:48 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.614 20:36:48 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2573761 00:05:23.614 20:36:48 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2573761 00:05:23.614 20:36:48 -- common/autotest_common.sh@817 -- # '[' -z 2573761 ']' 00:05:23.614 20:36:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.614 20:36:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:23.614 20:36:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
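(For reference: the json_config tests above tear the target down by sending SIGINT and polling with kill -0 until the process exits. A minimal sketch of that shutdown loop, assuming $pid holds the spdk_tgt PID recorded at launch.)
  kill -SIGINT $pid
  for (( i = 0; i < 30; i++ )); do
      kill -0 $pid 2>/dev/null || break   # target has exited
      sleep 0.5
  done
  echo 'SPDK target shutdown done'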
00:05:23.614 20:36:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:23.614 20:36:48 -- common/autotest_common.sh@10 -- # set +x 00:05:23.614 20:36:48 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.614 [2024-04-24 20:36:48.109950] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:23.614 [2024-04-24 20:36:48.110017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2573761 ] 00:05:23.614 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.614 [2024-04-24 20:36:48.190338] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.874 [2024-04-24 20:36:48.259580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.443 20:36:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:24.443 20:36:48 -- common/autotest_common.sh@850 -- # return 0 00:05:24.443 20:36:48 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:24.703 20:36:49 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2573761 00:05:24.703 20:36:49 -- common/autotest_common.sh@936 -- # '[' -z 2573761 ']' 00:05:24.703 20:36:49 -- common/autotest_common.sh@940 -- # kill -0 2573761 00:05:24.703 20:36:49 -- common/autotest_common.sh@941 -- # uname 00:05:24.703 20:36:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.703 20:36:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2573761 00:05:24.703 20:36:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:24.703 20:36:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:24.703 20:36:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2573761' 00:05:24.703 killing process with pid 2573761 00:05:24.703 20:36:49 -- common/autotest_common.sh@955 -- # kill 2573761 00:05:24.703 20:36:49 -- common/autotest_common.sh@960 -- # wait 2573761 00:05:24.963 00:05:24.963 real 0m1.477s 00:05:24.963 user 0m1.739s 00:05:24.963 sys 0m0.369s 00:05:24.963 20:36:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.963 20:36:49 -- common/autotest_common.sh@10 -- # set +x 00:05:24.963 ************************************ 00:05:24.963 END TEST alias_rpc 00:05:24.963 ************************************ 00:05:24.963 20:36:49 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:24.963 20:36:49 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:24.963 20:36:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:24.963 20:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:24.963 20:36:49 -- common/autotest_common.sh@10 -- # set +x 00:05:25.224 ************************************ 00:05:25.224 START TEST spdkcli_tcp 00:05:25.224 ************************************ 00:05:25.224 20:36:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.224 * Looking for test storage... 
00:05:25.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:25.224 20:36:49 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:25.224 20:36:49 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.224 20:36:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:25.224 20:36:49 -- common/autotest_common.sh@10 -- # set +x 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2574143 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@27 -- # waitforlisten 2574143 00:05:25.224 20:36:49 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.224 20:36:49 -- common/autotest_common.sh@817 -- # '[' -z 2574143 ']' 00:05:25.224 20:36:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.224 20:36:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.224 20:36:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.224 20:36:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.224 20:36:49 -- common/autotest_common.sh@10 -- # set +x 00:05:25.224 [2024-04-24 20:36:49.806077] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:05:25.224 [2024-04-24 20:36:49.806145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574143 ] 00:05:25.224 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.484 [2024-04-24 20:36:49.885971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.484 [2024-04-24 20:36:49.956410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.484 [2024-04-24 20:36:49.956416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.056 20:36:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.056 20:36:50 -- common/autotest_common.sh@850 -- # return 0 00:05:26.056 20:36:50 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.056 20:36:50 -- spdkcli/tcp.sh@31 -- # socat_pid=2574407 00:05:26.056 20:36:50 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.318 [ 00:05:26.318 "bdev_malloc_delete", 00:05:26.318 "bdev_malloc_create", 00:05:26.318 "bdev_null_resize", 00:05:26.318 "bdev_null_delete", 00:05:26.318 "bdev_null_create", 00:05:26.318 "bdev_nvme_cuse_unregister", 00:05:26.318 "bdev_nvme_cuse_register", 00:05:26.318 "bdev_opal_new_user", 00:05:26.318 "bdev_opal_set_lock_state", 00:05:26.318 "bdev_opal_delete", 00:05:26.318 "bdev_opal_get_info", 00:05:26.318 "bdev_opal_create", 00:05:26.318 "bdev_nvme_opal_revert", 00:05:26.318 "bdev_nvme_opal_init", 00:05:26.318 "bdev_nvme_send_cmd", 00:05:26.318 "bdev_nvme_get_path_iostat", 00:05:26.318 "bdev_nvme_get_mdns_discovery_info", 00:05:26.318 "bdev_nvme_stop_mdns_discovery", 00:05:26.318 "bdev_nvme_start_mdns_discovery", 00:05:26.318 "bdev_nvme_set_multipath_policy", 00:05:26.318 "bdev_nvme_set_preferred_path", 00:05:26.318 "bdev_nvme_get_io_paths", 00:05:26.318 "bdev_nvme_remove_error_injection", 00:05:26.318 "bdev_nvme_add_error_injection", 00:05:26.318 "bdev_nvme_get_discovery_info", 00:05:26.318 "bdev_nvme_stop_discovery", 00:05:26.318 "bdev_nvme_start_discovery", 00:05:26.318 "bdev_nvme_get_controller_health_info", 00:05:26.318 "bdev_nvme_disable_controller", 00:05:26.318 "bdev_nvme_enable_controller", 00:05:26.318 "bdev_nvme_reset_controller", 00:05:26.318 "bdev_nvme_get_transport_statistics", 00:05:26.318 "bdev_nvme_apply_firmware", 00:05:26.318 "bdev_nvme_detach_controller", 00:05:26.318 "bdev_nvme_get_controllers", 00:05:26.318 "bdev_nvme_attach_controller", 00:05:26.318 "bdev_nvme_set_hotplug", 00:05:26.318 "bdev_nvme_set_options", 00:05:26.318 "bdev_passthru_delete", 00:05:26.318 "bdev_passthru_create", 00:05:26.318 "bdev_lvol_grow_lvstore", 00:05:26.318 "bdev_lvol_get_lvols", 00:05:26.318 "bdev_lvol_get_lvstores", 00:05:26.318 "bdev_lvol_delete", 00:05:26.318 "bdev_lvol_set_read_only", 00:05:26.318 "bdev_lvol_resize", 00:05:26.318 "bdev_lvol_decouple_parent", 00:05:26.318 "bdev_lvol_inflate", 00:05:26.318 "bdev_lvol_rename", 00:05:26.318 "bdev_lvol_clone_bdev", 00:05:26.318 "bdev_lvol_clone", 00:05:26.318 "bdev_lvol_snapshot", 00:05:26.318 "bdev_lvol_create", 00:05:26.318 "bdev_lvol_delete_lvstore", 00:05:26.318 "bdev_lvol_rename_lvstore", 00:05:26.318 "bdev_lvol_create_lvstore", 00:05:26.318 "bdev_raid_set_options", 00:05:26.318 "bdev_raid_remove_base_bdev", 00:05:26.318 "bdev_raid_add_base_bdev", 00:05:26.318 "bdev_raid_delete", 00:05:26.318 "bdev_raid_create", 
00:05:26.318 "bdev_raid_get_bdevs", 00:05:26.318 "bdev_error_inject_error", 00:05:26.318 "bdev_error_delete", 00:05:26.319 "bdev_error_create", 00:05:26.319 "bdev_split_delete", 00:05:26.319 "bdev_split_create", 00:05:26.319 "bdev_delay_delete", 00:05:26.319 "bdev_delay_create", 00:05:26.319 "bdev_delay_update_latency", 00:05:26.319 "bdev_zone_block_delete", 00:05:26.319 "bdev_zone_block_create", 00:05:26.319 "blobfs_create", 00:05:26.319 "blobfs_detect", 00:05:26.319 "blobfs_set_cache_size", 00:05:26.319 "bdev_aio_delete", 00:05:26.319 "bdev_aio_rescan", 00:05:26.319 "bdev_aio_create", 00:05:26.319 "bdev_ftl_set_property", 00:05:26.319 "bdev_ftl_get_properties", 00:05:26.319 "bdev_ftl_get_stats", 00:05:26.319 "bdev_ftl_unmap", 00:05:26.319 "bdev_ftl_unload", 00:05:26.319 "bdev_ftl_delete", 00:05:26.319 "bdev_ftl_load", 00:05:26.319 "bdev_ftl_create", 00:05:26.319 "bdev_virtio_attach_controller", 00:05:26.319 "bdev_virtio_scsi_get_devices", 00:05:26.319 "bdev_virtio_detach_controller", 00:05:26.319 "bdev_virtio_blk_set_hotplug", 00:05:26.319 "bdev_iscsi_delete", 00:05:26.319 "bdev_iscsi_create", 00:05:26.319 "bdev_iscsi_set_options", 00:05:26.319 "accel_error_inject_error", 00:05:26.319 "ioat_scan_accel_module", 00:05:26.319 "dsa_scan_accel_module", 00:05:26.319 "iaa_scan_accel_module", 00:05:26.319 "vfu_virtio_create_scsi_endpoint", 00:05:26.319 "vfu_virtio_scsi_remove_target", 00:05:26.319 "vfu_virtio_scsi_add_target", 00:05:26.319 "vfu_virtio_create_blk_endpoint", 00:05:26.319 "vfu_virtio_delete_endpoint", 00:05:26.319 "keyring_file_remove_key", 00:05:26.319 "keyring_file_add_key", 00:05:26.319 "iscsi_get_histogram", 00:05:26.319 "iscsi_enable_histogram", 00:05:26.319 "iscsi_set_options", 00:05:26.319 "iscsi_get_auth_groups", 00:05:26.319 "iscsi_auth_group_remove_secret", 00:05:26.319 "iscsi_auth_group_add_secret", 00:05:26.319 "iscsi_delete_auth_group", 00:05:26.319 "iscsi_create_auth_group", 00:05:26.319 "iscsi_set_discovery_auth", 00:05:26.319 "iscsi_get_options", 00:05:26.319 "iscsi_target_node_request_logout", 00:05:26.319 "iscsi_target_node_set_redirect", 00:05:26.319 "iscsi_target_node_set_auth", 00:05:26.319 "iscsi_target_node_add_lun", 00:05:26.319 "iscsi_get_stats", 00:05:26.319 "iscsi_get_connections", 00:05:26.319 "iscsi_portal_group_set_auth", 00:05:26.319 "iscsi_start_portal_group", 00:05:26.319 "iscsi_delete_portal_group", 00:05:26.319 "iscsi_create_portal_group", 00:05:26.319 "iscsi_get_portal_groups", 00:05:26.319 "iscsi_delete_target_node", 00:05:26.319 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.319 "iscsi_target_node_add_pg_ig_maps", 00:05:26.319 "iscsi_create_target_node", 00:05:26.319 "iscsi_get_target_nodes", 00:05:26.319 "iscsi_delete_initiator_group", 00:05:26.319 "iscsi_initiator_group_remove_initiators", 00:05:26.319 "iscsi_initiator_group_add_initiators", 00:05:26.319 "iscsi_create_initiator_group", 00:05:26.319 "iscsi_get_initiator_groups", 00:05:26.319 "nvmf_set_crdt", 00:05:26.319 "nvmf_set_config", 00:05:26.319 "nvmf_set_max_subsystems", 00:05:26.319 "nvmf_subsystem_get_listeners", 00:05:26.319 "nvmf_subsystem_get_qpairs", 00:05:26.319 "nvmf_subsystem_get_controllers", 00:05:26.319 "nvmf_get_stats", 00:05:26.319 "nvmf_get_transports", 00:05:26.319 "nvmf_create_transport", 00:05:26.319 "nvmf_get_targets", 00:05:26.319 "nvmf_delete_target", 00:05:26.319 "nvmf_create_target", 00:05:26.319 "nvmf_subsystem_allow_any_host", 00:05:26.319 "nvmf_subsystem_remove_host", 00:05:26.319 "nvmf_subsystem_add_host", 00:05:26.319 "nvmf_ns_remove_host", 00:05:26.319 
"nvmf_ns_add_host", 00:05:26.319 "nvmf_subsystem_remove_ns", 00:05:26.319 "nvmf_subsystem_add_ns", 00:05:26.319 "nvmf_subsystem_listener_set_ana_state", 00:05:26.319 "nvmf_discovery_get_referrals", 00:05:26.319 "nvmf_discovery_remove_referral", 00:05:26.319 "nvmf_discovery_add_referral", 00:05:26.319 "nvmf_subsystem_remove_listener", 00:05:26.319 "nvmf_subsystem_add_listener", 00:05:26.319 "nvmf_delete_subsystem", 00:05:26.319 "nvmf_create_subsystem", 00:05:26.319 "nvmf_get_subsystems", 00:05:26.319 "env_dpdk_get_mem_stats", 00:05:26.319 "nbd_get_disks", 00:05:26.319 "nbd_stop_disk", 00:05:26.319 "nbd_start_disk", 00:05:26.319 "ublk_recover_disk", 00:05:26.319 "ublk_get_disks", 00:05:26.319 "ublk_stop_disk", 00:05:26.319 "ublk_start_disk", 00:05:26.319 "ublk_destroy_target", 00:05:26.319 "ublk_create_target", 00:05:26.319 "virtio_blk_create_transport", 00:05:26.319 "virtio_blk_get_transports", 00:05:26.319 "vhost_controller_set_coalescing", 00:05:26.319 "vhost_get_controllers", 00:05:26.319 "vhost_delete_controller", 00:05:26.319 "vhost_create_blk_controller", 00:05:26.319 "vhost_scsi_controller_remove_target", 00:05:26.319 "vhost_scsi_controller_add_target", 00:05:26.319 "vhost_start_scsi_controller", 00:05:26.319 "vhost_create_scsi_controller", 00:05:26.319 "thread_set_cpumask", 00:05:26.319 "framework_get_scheduler", 00:05:26.319 "framework_set_scheduler", 00:05:26.319 "framework_get_reactors", 00:05:26.319 "thread_get_io_channels", 00:05:26.319 "thread_get_pollers", 00:05:26.319 "thread_get_stats", 00:05:26.319 "framework_monitor_context_switch", 00:05:26.319 "spdk_kill_instance", 00:05:26.319 "log_enable_timestamps", 00:05:26.319 "log_get_flags", 00:05:26.319 "log_clear_flag", 00:05:26.319 "log_set_flag", 00:05:26.319 "log_get_level", 00:05:26.319 "log_set_level", 00:05:26.319 "log_get_print_level", 00:05:26.319 "log_set_print_level", 00:05:26.319 "framework_enable_cpumask_locks", 00:05:26.319 "framework_disable_cpumask_locks", 00:05:26.319 "framework_wait_init", 00:05:26.319 "framework_start_init", 00:05:26.320 "scsi_get_devices", 00:05:26.320 "bdev_get_histogram", 00:05:26.320 "bdev_enable_histogram", 00:05:26.320 "bdev_set_qos_limit", 00:05:26.320 "bdev_set_qd_sampling_period", 00:05:26.320 "bdev_get_bdevs", 00:05:26.320 "bdev_reset_iostat", 00:05:26.320 "bdev_get_iostat", 00:05:26.320 "bdev_examine", 00:05:26.320 "bdev_wait_for_examine", 00:05:26.320 "bdev_set_options", 00:05:26.320 "notify_get_notifications", 00:05:26.320 "notify_get_types", 00:05:26.320 "accel_get_stats", 00:05:26.320 "accel_set_options", 00:05:26.320 "accel_set_driver", 00:05:26.320 "accel_crypto_key_destroy", 00:05:26.320 "accel_crypto_keys_get", 00:05:26.320 "accel_crypto_key_create", 00:05:26.320 "accel_assign_opc", 00:05:26.320 "accel_get_module_info", 00:05:26.320 "accel_get_opc_assignments", 00:05:26.320 "vmd_rescan", 00:05:26.320 "vmd_remove_device", 00:05:26.320 "vmd_enable", 00:05:26.320 "sock_set_default_impl", 00:05:26.320 "sock_impl_set_options", 00:05:26.320 "sock_impl_get_options", 00:05:26.320 "iobuf_get_stats", 00:05:26.320 "iobuf_set_options", 00:05:26.320 "keyring_get_keys", 00:05:26.320 "framework_get_pci_devices", 00:05:26.320 "framework_get_config", 00:05:26.320 "framework_get_subsystems", 00:05:26.320 "vfu_tgt_set_base_path", 00:05:26.320 "trace_get_info", 00:05:26.320 "trace_get_tpoint_group_mask", 00:05:26.320 "trace_disable_tpoint_group", 00:05:26.320 "trace_enable_tpoint_group", 00:05:26.320 "trace_clear_tpoint_mask", 00:05:26.320 "trace_set_tpoint_mask", 00:05:26.320 
"spdk_get_version", 00:05:26.320 "rpc_get_methods" 00:05:26.320 ] 00:05:26.320 20:36:50 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.320 20:36:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:26.320 20:36:50 -- common/autotest_common.sh@10 -- # set +x 00:05:26.320 20:36:50 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.320 20:36:50 -- spdkcli/tcp.sh@38 -- # killprocess 2574143 00:05:26.320 20:36:50 -- common/autotest_common.sh@936 -- # '[' -z 2574143 ']' 00:05:26.320 20:36:50 -- common/autotest_common.sh@940 -- # kill -0 2574143 00:05:26.320 20:36:50 -- common/autotest_common.sh@941 -- # uname 00:05:26.320 20:36:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:26.320 20:36:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2574143 00:05:26.582 20:36:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:26.582 20:36:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:26.582 20:36:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2574143' 00:05:26.582 killing process with pid 2574143 00:05:26.582 20:36:50 -- common/autotest_common.sh@955 -- # kill 2574143 00:05:26.582 20:36:50 -- common/autotest_common.sh@960 -- # wait 2574143 00:05:26.582 00:05:26.582 real 0m1.549s 00:05:26.582 user 0m2.954s 00:05:26.582 sys 0m0.448s 00:05:26.582 20:36:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:26.582 20:36:51 -- common/autotest_common.sh@10 -- # set +x 00:05:26.582 ************************************ 00:05:26.582 END TEST spdkcli_tcp 00:05:26.582 ************************************ 00:05:26.582 20:36:51 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.582 20:36:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:26.582 20:36:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:26.582 20:36:51 -- common/autotest_common.sh@10 -- # set +x 00:05:26.844 ************************************ 00:05:26.844 START TEST dpdk_mem_utility 00:05:26.844 ************************************ 00:05:26.844 20:36:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:26.844 * Looking for test storage... 00:05:26.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:26.844 20:36:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:26.844 20:36:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2574557 00:05:26.844 20:36:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2574557 00:05:26.844 20:36:51 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.844 20:36:51 -- common/autotest_common.sh@817 -- # '[' -z 2574557 ']' 00:05:26.844 20:36:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.844 20:36:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:26.844 20:36:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:26.844 20:36:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:26.844 20:36:51 -- common/autotest_common.sh@10 -- # set +x 00:05:27.105 [2024-04-24 20:36:51.522603] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:27.105 [2024-04-24 20:36:51.522674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574557 ] 00:05:27.105 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.105 [2024-04-24 20:36:51.601202] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.105 [2024-04-24 20:36:51.670619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.049 20:36:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:28.049 20:36:52 -- common/autotest_common.sh@850 -- # return 0 00:05:28.049 20:36:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:28.049 20:36:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:28.049 20:36:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:28.049 20:36:52 -- common/autotest_common.sh@10 -- # set +x 00:05:28.049 { 00:05:28.049 "filename": "/tmp/spdk_mem_dump.txt" 00:05:28.049 } 00:05:28.049 20:36:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:28.049 20:36:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:28.049 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:28.049 1 heaps totaling size 814.000000 MiB 00:05:28.049 size: 814.000000 MiB heap id: 0 00:05:28.049 end heaps---------- 00:05:28.049 8 mempools totaling size 598.116089 MiB 00:05:28.049 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:28.049 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:28.049 size: 84.521057 MiB name: bdev_io_2574557 00:05:28.049 size: 51.011292 MiB name: evtpool_2574557 00:05:28.049 size: 50.003479 MiB name: msgpool_2574557 00:05:28.049 size: 21.763794 MiB name: PDU_Pool 00:05:28.049 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:28.049 size: 0.026123 MiB name: Session_Pool 00:05:28.049 end mempools------- 00:05:28.049 6 memzones totaling size 4.142822 MiB 00:05:28.049 size: 1.000366 MiB name: RG_ring_0_2574557 00:05:28.049 size: 1.000366 MiB name: RG_ring_1_2574557 00:05:28.049 size: 1.000366 MiB name: RG_ring_4_2574557 00:05:28.049 size: 1.000366 MiB name: RG_ring_5_2574557 00:05:28.049 size: 0.125366 MiB name: RG_ring_2_2574557 00:05:28.049 size: 0.015991 MiB name: RG_ring_3_2574557 00:05:28.049 end memzones------- 00:05:28.049 20:36:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:28.049 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:28.049 list of free elements. 
size: 12.519348 MiB 00:05:28.049 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:28.049 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:28.049 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:28.049 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:28.049 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:28.049 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:28.049 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:28.049 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:28.049 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:28.049 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:28.049 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:28.049 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:28.049 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:28.049 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:28.049 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:28.049 list of standard malloc elements. size: 199.218079 MiB 00:05:28.049 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:28.049 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:28.049 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:28.049 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:28.049 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:28.049 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:28.049 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:28.049 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:28.049 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:28.049 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:28.049 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:28.049 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:28.049 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:28.049 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:28.049 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:28.049 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:28.049 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:28.049 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:28.049 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:28.049 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:28.049 list of memzone associated elements. size: 602.262573 MiB 00:05:28.049 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:28.049 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:28.049 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:28.050 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:28.050 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:28.050 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2574557_0 00:05:28.050 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:28.050 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2574557_0 00:05:28.050 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:28.050 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2574557_0 00:05:28.050 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:28.050 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:28.050 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:28.050 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:28.050 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:28.050 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2574557 00:05:28.050 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:28.050 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2574557 00:05:28.050 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:28.050 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2574557 00:05:28.050 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:28.050 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:28.050 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:28.050 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:28.050 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:28.050 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:28.050 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:28.050 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:28.050 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:28.050 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2574557 00:05:28.050 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:28.050 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2574557 00:05:28.050 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:28.050 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2574557 00:05:28.050 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:28.050 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2574557 00:05:28.050 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:28.050 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2574557 00:05:28.050 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:28.050 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:28.050 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:28.050 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:28.050 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:28.050 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:28.050 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:28.050 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2574557 00:05:28.050 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:28.050 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:28.050 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:28.050 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:28.050 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:28.050 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2574557 00:05:28.050 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:28.050 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:28.050 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:28.050 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2574557 00:05:28.050 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:28.050 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2574557 00:05:28.050 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:28.050 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:28.050 20:36:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:28.050 20:36:52 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2574557 00:05:28.050 20:36:52 -- common/autotest_common.sh@936 -- # '[' -z 2574557 ']' 00:05:28.050 20:36:52 -- common/autotest_common.sh@940 -- # kill -0 2574557 00:05:28.050 20:36:52 -- common/autotest_common.sh@941 -- # uname 00:05:28.050 20:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:28.050 20:36:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2574557 00:05:28.050 20:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:28.050 20:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:28.050 20:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2574557' 00:05:28.050 killing process with pid 2574557 00:05:28.050 20:36:52 -- common/autotest_common.sh@955 -- # kill 2574557 00:05:28.050 20:36:52 -- common/autotest_common.sh@960 -- # wait 2574557 00:05:28.312 00:05:28.312 real 0m1.379s 00:05:28.312 user 0m1.534s 00:05:28.312 sys 0m0.379s 00:05:28.312 20:36:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:28.312 20:36:52 -- common/autotest_common.sh@10 -- # set +x 00:05:28.312 ************************************ 00:05:28.312 END TEST dpdk_mem_utility 00:05:28.312 ************************************ 00:05:28.312 20:36:52 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.312 20:36:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:28.312 20:36:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.312 20:36:52 -- common/autotest_common.sh@10 -- # set +x 
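The dpdk_mem_utility listing above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the running target write /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py parses that dump, first as the heap/mempool/memzone summary and then, with -m 0, as the element-by-element view of heap 0. A minimal sketch against an already-running target, assuming the default /var/tmp/spdk.sock RPC socket:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: same checkout path as this job

# Ask the target to write its DPDK memory dump (the RPC replies with the dump filename).
"$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats

# Summarise heaps, mempools and memzones from the dump ...
"$SPDK_DIR/scripts/dpdk_mem_info.py"

# ... and expand heap 0 into its individual free/busy elements, as in the listing above.
"$SPDK_DIR/scripts/dpdk_mem_info.py" -m 0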
00:05:28.312 ************************************ 00:05:28.312 START TEST event 00:05:28.312 ************************************ 00:05:28.312 20:36:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.574 * Looking for test storage... 00:05:28.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.574 20:36:53 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.574 20:36:53 -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.574 20:36:53 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.574 20:36:53 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:28.574 20:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:28.574 20:36:53 -- common/autotest_common.sh@10 -- # set +x 00:05:28.574 ************************************ 00:05:28.574 START TEST event_perf 00:05:28.574 ************************************ 00:05:28.574 20:36:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.836 Running I/O for 1 seconds...[2024-04-24 20:36:53.215937] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:28.836 [2024-04-24 20:36:53.216042] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574980 ] 00:05:28.836 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.836 [2024-04-24 20:36:53.298143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.836 [2024-04-24 20:36:53.379509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.836 [2024-04-24 20:36:53.379646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.836 [2024-04-24 20:36:53.379786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.836 [2024-04-24 20:36:53.379992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.223 Running I/O for 1 seconds... 00:05:30.223 lcore 0: 170413 00:05:30.223 lcore 1: 170410 00:05:30.223 lcore 2: 170408 00:05:30.223 lcore 3: 170411 00:05:30.223 done. 
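The event_perf figures above come from the standalone benchmark running for one second across four reactors; each "lcore N:" line is the number of events that reactor processed in that window. A minimal sketch of the same invocation, assuming the checkout path used by this job; the core mask and duration are the only knobs exercised here:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: same checkout path as this job

# -m 0xF pins four reactors (cores 0-3), -t 1 runs the event loop for one second.
"$SPDK_DIR/test/event/event_perf/event_perf" -m 0xF -t 1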
00:05:30.223 00:05:30.223 real 0m1.239s 00:05:30.223 user 0m4.142s 00:05:30.223 sys 0m0.095s 00:05:30.223 20:36:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:30.223 20:36:54 -- common/autotest_common.sh@10 -- # set +x 00:05:30.223 ************************************ 00:05:30.223 END TEST event_perf 00:05:30.223 ************************************ 00:05:30.223 20:36:54 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.223 20:36:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:30.223 20:36:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.223 20:36:54 -- common/autotest_common.sh@10 -- # set +x 00:05:30.223 ************************************ 00:05:30.223 START TEST event_reactor 00:05:30.223 ************************************ 00:05:30.223 20:36:54 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:30.223 [2024-04-24 20:36:54.634248] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:30.223 [2024-04-24 20:36:54.634345] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575264 ] 00:05:30.223 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.223 [2024-04-24 20:36:54.718046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.223 [2024-04-24 20:36:54.793878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.647 test_start 00:05:31.647 oneshot 00:05:31.647 tick 100 00:05:31.647 tick 100 00:05:31.647 tick 250 00:05:31.647 tick 100 00:05:31.647 tick 100 00:05:31.647 tick 250 00:05:31.647 tick 100 00:05:31.647 tick 500 00:05:31.647 tick 100 00:05:31.647 tick 100 00:05:31.647 tick 250 00:05:31.647 tick 100 00:05:31.647 tick 100 00:05:31.647 test_end 00:05:31.647 00:05:31.647 real 0m1.235s 00:05:31.647 user 0m1.144s 00:05:31.647 sys 0m0.085s 00:05:31.647 20:36:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:31.647 20:36:55 -- common/autotest_common.sh@10 -- # set +x 00:05:31.647 ************************************ 00:05:31.647 END TEST event_reactor 00:05:31.647 ************************************ 00:05:31.647 20:36:55 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.647 20:36:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:31.647 20:36:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:31.647 20:36:55 -- common/autotest_common.sh@10 -- # set +x 00:05:31.647 ************************************ 00:05:31.647 START TEST event_reactor_perf 00:05:31.647 ************************************ 00:05:31.647 20:36:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:31.647 [2024-04-24 20:36:56.054466] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:05:31.647 [2024-04-24 20:36:56.054572] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2575620 ] 00:05:31.647 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.647 [2024-04-24 20:36:56.135819] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.647 [2024-04-24 20:36:56.210806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.035 test_start 00:05:33.035 test_end 00:05:33.035 Performance: 364870 events per second 00:05:33.035 00:05:33.035 real 0m1.231s 00:05:33.035 user 0m1.138s 00:05:33.035 sys 0m0.088s 00:05:33.035 20:36:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.035 20:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:33.035 ************************************ 00:05:33.035 END TEST event_reactor_perf 00:05:33.035 ************************************ 00:05:33.035 20:36:57 -- event/event.sh@49 -- # uname -s 00:05:33.035 20:36:57 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:33.035 20:36:57 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.035 20:36:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.035 20:36:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.035 20:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:33.035 ************************************ 00:05:33.035 START TEST event_scheduler 00:05:33.035 ************************************ 00:05:33.035 20:36:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:33.035 * Looking for test storage... 00:05:33.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:33.035 20:36:57 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:33.035 20:36:57 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2576009 00:05:33.035 20:36:57 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.035 20:36:57 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:33.035 20:36:57 -- scheduler/scheduler.sh@37 -- # waitforlisten 2576009 00:05:33.035 20:36:57 -- common/autotest_common.sh@817 -- # '[' -z 2576009 ']' 00:05:33.035 20:36:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.035 20:36:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:33.035 20:36:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.035 20:36:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:33.035 20:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:33.035 [2024-04-24 20:36:57.615518] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:05:33.035 [2024-04-24 20:36:57.615587] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576009 ] 00:05:33.035 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.035 [2024-04-24 20:36:57.673610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:33.296 [2024-04-24 20:36:57.736701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.296 [2024-04-24 20:36:57.736846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.296 [2024-04-24 20:36:57.737078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:33.296 [2024-04-24 20:36:57.737078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.296 20:36:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:33.296 20:36:57 -- common/autotest_common.sh@850 -- # return 0 00:05:33.296 20:36:57 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:33.296 20:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.296 20:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:33.296 POWER: Env isn't set yet! 00:05:33.296 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:33.296 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:33.296 POWER: Cannot set governor of lcore 0 to userspace 00:05:33.296 POWER: Attempting to initialise PSTAT power management... 00:05:33.296 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:33.296 POWER: Initialized successfully for lcore 0 power management 00:05:33.296 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:33.296 POWER: Initialized successfully for lcore 1 power management 00:05:33.296 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:33.296 POWER: Initialized successfully for lcore 2 power management 00:05:33.296 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:33.296 POWER: Initialized successfully for lcore 3 power management 00:05:33.296 20:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.296 20:36:57 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:33.296 20:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.296 20:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:33.296 [2024-04-24 20:36:57.897388] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
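The block above shows the scheduler test switching to the dynamic scheduler before framework initialisation, with DPDK's power library moving the cores to the 'performance' governor as a side effect. The same RPC sequence can be issued by hand against any target started with --wait-for-rpc; a minimal sketch, assuming the default RPC socket:

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: same checkout path as this job

# Must be called while the app is still waiting for RPCs (i.e. it was started with --wait-for-rpc).
"$SPDK_DIR/scripts/rpc.py" framework_set_scheduler dynamic

# Finish subsystem initialisation; the scheduler chosen above takes effect from here on.
"$SPDK_DIR/scripts/rpc.py" framework_start_init

# Confirm which scheduler is active.
"$SPDK_DIR/scripts/rpc.py" framework_get_scheduler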
00:05:33.296 20:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.296 20:36:57 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:33.296 20:36:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.297 20:36:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.297 20:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 ************************************ 00:05:33.558 START TEST scheduler_create_thread 00:05:33.558 ************************************ 00:05:33.558 20:36:58 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 2 00:05:33.558 20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 3 00:05:33.558 20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 4 00:05:33.558 20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 5 00:05:33.558 20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 6 00:05:33.558 20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 7 00:05:33.558 20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 8 00:05:33.558 20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:33.558 9 00:05:33.558 
20:36:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:33.558 20:36:58 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:33.558 20:36:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:33.558 20:36:58 -- common/autotest_common.sh@10 -- # set +x 00:05:34.943 10 00:05:34.943 20:36:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:34.943 20:36:59 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:34.943 20:36:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:34.943 20:36:59 -- common/autotest_common.sh@10 -- # set +x 00:05:36.327 20:37:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.327 20:37:00 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:36.327 20:37:00 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:36.327 20:37:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.327 20:37:00 -- common/autotest_common.sh@10 -- # set +x 00:05:36.897 20:37:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:36.897 20:37:01 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.897 20:37:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:36.897 20:37:01 -- common/autotest_common.sh@10 -- # set +x 00:05:38.281 20:37:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:38.281 20:37:02 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:38.281 20:37:02 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:38.281 20:37:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:38.281 20:37:02 -- common/autotest_common.sh@10 -- # set +x 00:05:38.852 20:37:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:38.852 00:05:38.852 real 0m5.198s 00:05:38.852 user 0m0.025s 00:05:38.852 sys 0m0.007s 00:05:38.852 20:37:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.852 20:37:03 -- common/autotest_common.sh@10 -- # set +x 00:05:38.852 ************************************ 00:05:38.852 END TEST scheduler_create_thread 00:05:38.852 ************************************ 00:05:38.852 20:37:03 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:38.852 20:37:03 -- scheduler/scheduler.sh@46 -- # killprocess 2576009 00:05:38.852 20:37:03 -- common/autotest_common.sh@936 -- # '[' -z 2576009 ']' 00:05:38.852 20:37:03 -- common/autotest_common.sh@940 -- # kill -0 2576009 00:05:38.852 20:37:03 -- common/autotest_common.sh@941 -- # uname 00:05:38.852 20:37:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.852 20:37:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2576009 00:05:38.852 20:37:03 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:38.852 20:37:03 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:38.852 20:37:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2576009' 00:05:38.852 killing process with pid 2576009 00:05:38.852 20:37:03 -- common/autotest_common.sh@955 -- # kill 2576009 00:05:38.852 20:37:03 -- common/autotest_common.sh@960 -- # wait 2576009 00:05:39.112 [2024-04-24 20:37:03.529158] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
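scheduler_create_thread above drives the test app through plugin RPCs: scheduler_thread_create spawns threads with a cpumask and an active percentage, scheduler_thread_set_active retunes one of them, and scheduler_thread_delete removes it. A minimal sketch of the same calls, assuming the scheduler test app is running and that PYTHONPATH lets rpc.py import scheduler_plugin (the directory below is an assumption; the test exports its own plugin location):

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumption: same checkout path as this job
export PYTHONPATH="$SPDK_DIR/test/event/scheduler:$PYTHONPATH"   # assumption: directory providing scheduler_plugin
RPC="$SPDK_DIR/scripts/rpc.py --plugin scheduler_plugin"

# Create a thread pinned to core 0 that wants to be busy 100% of the time; the RPC returns its thread id.
tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)

# Drop that thread to 50% activity, then delete it, mirroring the set_active/delete calls traced above.
$RPC scheduler_thread_set_active "$tid" 50
$RPC scheduler_thread_delete "$tid"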
00:05:39.112 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:39.112 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:39.112 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:39.112 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:39.112 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:39.112 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:39.112 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:39.112 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:39.112 00:05:39.112 real 0m6.255s 00:05:39.112 user 0m12.199s 00:05:39.112 sys 0m0.414s 00:05:39.112 20:37:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:39.112 20:37:03 -- common/autotest_common.sh@10 -- # set +x 00:05:39.112 ************************************ 00:05:39.112 END TEST event_scheduler 00:05:39.112 ************************************ 00:05:39.373 20:37:03 -- event/event.sh@51 -- # modprobe -n nbd 00:05:39.373 20:37:03 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:39.373 20:37:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.373 20:37:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.373 20:37:03 -- common/autotest_common.sh@10 -- # set +x 00:05:39.373 ************************************ 00:05:39.373 START TEST app_repeat 00:05:39.373 ************************************ 00:05:39.373 20:37:03 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:39.373 20:37:03 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.373 20:37:03 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.373 20:37:03 -- event/event.sh@13 -- # local nbd_list 00:05:39.373 20:37:03 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.373 20:37:03 -- event/event.sh@14 -- # local bdev_list 00:05:39.373 20:37:03 -- event/event.sh@15 -- # local repeat_times=4 00:05:39.373 20:37:03 -- event/event.sh@17 -- # modprobe nbd 00:05:39.373 20:37:03 -- event/event.sh@19 -- # repeat_pid=2577410 00:05:39.373 20:37:03 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.373 20:37:03 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:39.373 20:37:03 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2577410' 00:05:39.373 Process app_repeat pid: 2577410 00:05:39.373 20:37:03 -- event/event.sh@23 -- # for i in {0..2} 00:05:39.373 20:37:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:39.373 spdk_app_start Round 0 00:05:39.373 20:37:03 -- event/event.sh@25 -- # waitforlisten 2577410 /var/tmp/spdk-nbd.sock 00:05:39.373 20:37:03 -- common/autotest_common.sh@817 -- # '[' -z 2577410 ']' 00:05:39.373 20:37:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.373 20:37:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:39.373 20:37:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:39.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.373 20:37:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:39.373 20:37:03 -- common/autotest_common.sh@10 -- # set +x 00:05:39.373 [2024-04-24 20:37:03.946045] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:39.373 [2024-04-24 20:37:03.946111] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2577410 ] 00:05:39.373 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.634 [2024-04-24 20:37:04.025506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.634 [2024-04-24 20:37:04.096637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.634 [2024-04-24 20:37:04.096643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.634 20:37:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:39.634 20:37:04 -- common/autotest_common.sh@850 -- # return 0 00:05:39.634 20:37:04 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.893 Malloc0 00:05:39.893 20:37:04 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.154 Malloc1 00:05:40.154 20:37:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@12 -- # local i 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.154 20:37:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.415 /dev/nbd0 00:05:40.415 20:37:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.415 20:37:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.415 20:37:04 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:40.415 20:37:04 -- common/autotest_common.sh@855 -- # local i 00:05:40.415 20:37:04 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:40.415 20:37:04 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:40.415 20:37:04 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:40.415 20:37:04 -- 
common/autotest_common.sh@859 -- # break 00:05:40.415 20:37:04 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.415 20:37:04 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.415 20:37:04 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.415 1+0 records in 00:05:40.415 1+0 records out 00:05:40.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207325 s, 19.8 MB/s 00:05:40.415 20:37:04 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.415 20:37:04 -- common/autotest_common.sh@872 -- # size=4096 00:05:40.415 20:37:04 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.415 20:37:04 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:40.415 20:37:04 -- common/autotest_common.sh@875 -- # return 0 00:05:40.415 20:37:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.415 20:37:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.415 20:37:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.415 /dev/nbd1 00:05:40.415 20:37:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.415 20:37:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.415 20:37:05 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:40.415 20:37:05 -- common/autotest_common.sh@855 -- # local i 00:05:40.415 20:37:05 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:40.415 20:37:05 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:40.415 20:37:05 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:40.415 20:37:05 -- common/autotest_common.sh@859 -- # break 00:05:40.415 20:37:05 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:40.415 20:37:05 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:40.415 20:37:05 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.415 1+0 records in 00:05:40.415 1+0 records out 00:05:40.415 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246567 s, 16.6 MB/s 00:05:40.416 20:37:05 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.416 20:37:05 -- common/autotest_common.sh@872 -- # size=4096 00:05:40.416 20:37:05 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.416 20:37:05 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:40.416 20:37:05 -- common/autotest_common.sh@875 -- # return 0 00:05:40.416 20:37:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.416 20:37:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.416 20:37:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.416 20:37:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.416 20:37:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.676 { 00:05:40.676 "nbd_device": "/dev/nbd0", 00:05:40.676 "bdev_name": "Malloc0" 00:05:40.676 }, 00:05:40.676 { 00:05:40.676 "nbd_device": "/dev/nbd1", 
00:05:40.676 "bdev_name": "Malloc1" 00:05:40.676 } 00:05:40.676 ]' 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.676 { 00:05:40.676 "nbd_device": "/dev/nbd0", 00:05:40.676 "bdev_name": "Malloc0" 00:05:40.676 }, 00:05:40.676 { 00:05:40.676 "nbd_device": "/dev/nbd1", 00:05:40.676 "bdev_name": "Malloc1" 00:05:40.676 } 00:05:40.676 ]' 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.676 /dev/nbd1' 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.676 /dev/nbd1' 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.676 20:37:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.677 20:37:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.677 256+0 records in 00:05:40.677 256+0 records out 00:05:40.677 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115634 s, 90.7 MB/s 00:05:40.677 20:37:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.677 20:37:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.938 256+0 records in 00:05:40.938 256+0 records out 00:05:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0343456 s, 30.5 MB/s 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.938 256+0 records in 00:05:40.938 256+0 records out 00:05:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162906 s, 64.4 MB/s 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@51 -- # local i 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.938 20:37:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@41 -- # break 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@41 -- # break 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.198 20:37:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@65 -- # true 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.458 20:37:06 -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.458 20:37:06 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.719 20:37:06 -- event/event.sh@35 -- # 
sleep 3 00:05:41.980 [2024-04-24 20:37:06.422836] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.980 [2024-04-24 20:37:06.485267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.980 [2024-04-24 20:37:06.485272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.980 [2024-04-24 20:37:06.517076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.980 [2024-04-24 20:37:06.517112] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.280 20:37:09 -- event/event.sh@23 -- # for i in {0..2} 00:05:45.280 20:37:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.280 spdk_app_start Round 1 00:05:45.280 20:37:09 -- event/event.sh@25 -- # waitforlisten 2577410 /var/tmp/spdk-nbd.sock 00:05:45.280 20:37:09 -- common/autotest_common.sh@817 -- # '[' -z 2577410 ']' 00:05:45.280 20:37:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.280 20:37:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.280 20:37:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.280 20:37:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.280 20:37:09 -- common/autotest_common.sh@10 -- # set +x 00:05:45.280 20:37:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:45.280 20:37:09 -- common/autotest_common.sh@850 -- # return 0 00:05:45.280 20:37:09 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.280 Malloc0 00:05:45.280 20:37:09 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.280 Malloc1 00:05:45.280 20:37:09 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@12 -- # local i 00:05:45.280 20:37:09 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.281 20:37:09 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.281 20:37:09 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.541 /dev/nbd0 00:05:45.542 20:37:10 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.542 20:37:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.542 20:37:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:45.542 20:37:10 -- common/autotest_common.sh@855 -- # local i 00:05:45.542 20:37:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:45.542 20:37:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:45.542 20:37:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:45.542 20:37:10 -- common/autotest_common.sh@859 -- # break 00:05:45.542 20:37:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:45.542 20:37:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:45.542 20:37:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.542 1+0 records in 00:05:45.542 1+0 records out 00:05:45.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336505 s, 12.2 MB/s 00:05:45.542 20:37:10 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.542 20:37:10 -- common/autotest_common.sh@872 -- # size=4096 00:05:45.542 20:37:10 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.542 20:37:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:45.542 20:37:10 -- common/autotest_common.sh@875 -- # return 0 00:05:45.542 20:37:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.542 20:37:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.542 20:37:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.823 /dev/nbd1 00:05:45.823 20:37:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.823 20:37:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.823 20:37:10 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:45.823 20:37:10 -- common/autotest_common.sh@855 -- # local i 00:05:45.823 20:37:10 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:45.823 20:37:10 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:45.823 20:37:10 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:45.823 20:37:10 -- common/autotest_common.sh@859 -- # break 00:05:45.823 20:37:10 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:45.823 20:37:10 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:45.823 20:37:10 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.823 1+0 records in 00:05:45.823 1+0 records out 00:05:45.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212828 s, 19.2 MB/s 00:05:45.823 20:37:10 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.823 20:37:10 -- common/autotest_common.sh@872 -- # size=4096 00:05:45.823 20:37:10 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.823 20:37:10 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:45.823 20:37:10 -- common/autotest_common.sh@875 -- # return 0 00:05:45.823 20:37:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.823 20:37:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.823 20:37:10 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.823 20:37:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.823 20:37:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.116 { 00:05:46.116 "nbd_device": "/dev/nbd0", 00:05:46.116 "bdev_name": "Malloc0" 00:05:46.116 }, 00:05:46.116 { 00:05:46.116 "nbd_device": "/dev/nbd1", 00:05:46.116 "bdev_name": "Malloc1" 00:05:46.116 } 00:05:46.116 ]' 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.116 { 00:05:46.116 "nbd_device": "/dev/nbd0", 00:05:46.116 "bdev_name": "Malloc0" 00:05:46.116 }, 00:05:46.116 { 00:05:46.116 "nbd_device": "/dev/nbd1", 00:05:46.116 "bdev_name": "Malloc1" 00:05:46.116 } 00:05:46.116 ]' 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.116 /dev/nbd1' 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.116 /dev/nbd1' 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.116 20:37:10 -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.117 256+0 records in 00:05:46.117 256+0 records out 00:05:46.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124916 s, 83.9 MB/s 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.117 256+0 records in 00:05:46.117 256+0 records out 00:05:46.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015531 s, 67.5 MB/s 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.117 256+0 records in 00:05:46.117 256+0 records out 00:05:46.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172007 s, 61.0 MB/s 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@51 -- # local i 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.117 20:37:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@41 -- # break 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.378 20:37:10 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@41 -- # break 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.638 20:37:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@65 -- # true 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.898 20:37:11 -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.898 20:37:11 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.158 20:37:11 -- event/event.sh@35 -- # sleep 3 00:05:47.158 [2024-04-24 20:37:11.729953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.158 [2024-04-24 20:37:11.792270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.158 [2024-04-24 20:37:11.792276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.419 [2024-04-24 20:37:11.824932] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.419 [2024-04-24 20:37:11.824966] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.965 20:37:14 -- event/event.sh@23 -- # for i in {0..2} 00:05:49.965 20:37:14 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.965 spdk_app_start Round 2 00:05:49.965 20:37:14 -- event/event.sh@25 -- # waitforlisten 2577410 /var/tmp/spdk-nbd.sock 00:05:49.965 20:37:14 -- common/autotest_common.sh@817 -- # '[' -z 2577410 ']' 00:05:49.965 20:37:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.965 20:37:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:49.965 20:37:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
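Each app_repeat round in the trace above repeats the same write-then-verify pass over NBD. A condensed shell sketch of one round body, reconstructed from the xtrace output (the RPC socket, the 4096-byte block size, and the 1 MiB compare window come from the log; $SPDK_DIR, the rpc helper, and the loop structure are illustrative stand-ins, not the verbatim event.sh/nbd_common.sh source):

  rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }  # $SPDK_DIR stands in for the full workspace path in the log
  rpc bdev_malloc_create 64 4096          # creates Malloc0 (64 MiB, 4096-byte blocks)
  rpc bdev_malloc_create 64 4096          # creates Malloc1
  rpc nbd_start_disk Malloc0 /dev/nbd0
  rpc nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256              # 1 MiB of random test data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct     # write phase through the NBD device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M nbdrandtest $nbd                                # verify phase: device contents must match the file
  done
  rm nbdrandtest
  rpc nbd_stop_disk /dev/nbd0
  rpc nbd_stop_disk /dev/nbd1
  rpc spdk_kill_instance SIGTERM          # ends this round; app_repeat restarts the app for the next one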
00:05:49.965 20:37:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:49.965 20:37:14 -- common/autotest_common.sh@10 -- # set +x 00:05:50.226 20:37:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:50.226 20:37:14 -- common/autotest_common.sh@850 -- # return 0 00:05:50.226 20:37:14 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.486 Malloc0 00:05:50.486 20:37:14 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.486 Malloc1 00:05:50.486 20:37:15 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@12 -- # local i 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.486 20:37:15 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.746 /dev/nbd0 00:05:50.746 20:37:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.746 20:37:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.746 20:37:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:50.746 20:37:15 -- common/autotest_common.sh@855 -- # local i 00:05:50.746 20:37:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:50.746 20:37:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:50.746 20:37:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:50.746 20:37:15 -- common/autotest_common.sh@859 -- # break 00:05:50.746 20:37:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:50.746 20:37:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:50.746 20:37:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.746 1+0 records in 00:05:50.746 1+0 records out 00:05:50.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281053 s, 14.6 MB/s 00:05:50.746 20:37:15 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.746 20:37:15 -- common/autotest_common.sh@872 -- # size=4096 00:05:50.746 20:37:15 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.746 20:37:15 -- common/autotest_common.sh@874 -- # 
'[' 4096 '!=' 0 ']' 00:05:50.746 20:37:15 -- common/autotest_common.sh@875 -- # return 0 00:05:50.746 20:37:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.746 20:37:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.746 20:37:15 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.006 /dev/nbd1 00:05:51.006 20:37:15 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.006 20:37:15 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.006 20:37:15 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:51.006 20:37:15 -- common/autotest_common.sh@855 -- # local i 00:05:51.006 20:37:15 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:51.006 20:37:15 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:51.006 20:37:15 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:51.006 20:37:15 -- common/autotest_common.sh@859 -- # break 00:05:51.006 20:37:15 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:51.006 20:37:15 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:51.006 20:37:15 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.006 1+0 records in 00:05:51.006 1+0 records out 00:05:51.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000293099 s, 14.0 MB/s 00:05:51.006 20:37:15 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.006 20:37:15 -- common/autotest_common.sh@872 -- # size=4096 00:05:51.006 20:37:15 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.006 20:37:15 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:51.006 20:37:15 -- common/autotest_common.sh@875 -- # return 0 00:05:51.006 20:37:15 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.006 20:37:15 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.006 20:37:15 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.006 20:37:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.006 20:37:15 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.280 { 00:05:51.280 "nbd_device": "/dev/nbd0", 00:05:51.280 "bdev_name": "Malloc0" 00:05:51.280 }, 00:05:51.280 { 00:05:51.280 "nbd_device": "/dev/nbd1", 00:05:51.280 "bdev_name": "Malloc1" 00:05:51.280 } 00:05:51.280 ]' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.280 { 00:05:51.280 "nbd_device": "/dev/nbd0", 00:05:51.280 "bdev_name": "Malloc0" 00:05:51.280 }, 00:05:51.280 { 00:05:51.280 "nbd_device": "/dev/nbd1", 00:05:51.280 "bdev_name": "Malloc1" 00:05:51.280 } 00:05:51.280 ]' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.280 /dev/nbd1' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.280 /dev/nbd1' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.280 20:37:15 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.280 256+0 records in 00:05:51.280 256+0 records out 00:05:51.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114902 s, 91.3 MB/s 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.280 256+0 records in 00:05:51.280 256+0 records out 00:05:51.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0358362 s, 29.3 MB/s 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.280 256+0 records in 00:05:51.280 256+0 records out 00:05:51.280 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195134 s, 53.7 MB/s 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@51 -- # local i 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.280 20:37:15 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.541 20:37:16 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@41 -- # break 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@41 -- # break 00:05:51.541 20:37:16 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.801 20:37:16 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.801 20:37:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.801 20:37:16 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@65 -- # true 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.802 20:37:16 -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.802 20:37:16 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.062 20:37:16 -- event/event.sh@35 -- # sleep 3 00:05:52.323 [2024-04-24 20:37:16.731190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.323 [2024-04-24 20:37:16.793518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.323 [2024-04-24 20:37:16.793523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.323 [2024-04-24 20:37:16.825351] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.323 [2024-04-24 20:37:16.825385] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
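Both the start and stop paths above gate on the kernel actually exposing, or retiring, the nbd device. A sketch of the waitfornbd/waitfornbd_exit helpers as inferred from the xtrace (the 20-iteration bound, the /proc/partitions grep, and the 4096-byte O_DIRECT probe read are in the log; the sleep interval and the /tmp path are assumptions, and the real helpers live in autotest_common.sh and nbd_common.sh):

  waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                   # assumed delay; the trace does not show its value
      done
      # prove the device is readable: pull one 4 KiB block through O_DIRECT
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions || break   # gone from /proc/partitions means the device stopped
          sleep 0.1
      done
  }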
00:05:55.628 20:37:19 -- event/event.sh@38 -- # waitforlisten 2577410 /var/tmp/spdk-nbd.sock 00:05:55.628 20:37:19 -- common/autotest_common.sh@817 -- # '[' -z 2577410 ']' 00:05:55.628 20:37:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.628 20:37:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.628 20:37:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.628 20:37:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.628 20:37:19 -- common/autotest_common.sh@10 -- # set +x 00:05:55.628 20:37:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.628 20:37:19 -- common/autotest_common.sh@850 -- # return 0 00:05:55.628 20:37:19 -- event/event.sh@39 -- # killprocess 2577410 00:05:55.628 20:37:19 -- common/autotest_common.sh@936 -- # '[' -z 2577410 ']' 00:05:55.628 20:37:19 -- common/autotest_common.sh@940 -- # kill -0 2577410 00:05:55.628 20:37:19 -- common/autotest_common.sh@941 -- # uname 00:05:55.628 20:37:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:55.628 20:37:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2577410 00:05:55.628 20:37:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:55.628 20:37:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:55.628 20:37:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2577410' 00:05:55.628 killing process with pid 2577410 00:05:55.628 20:37:19 -- common/autotest_common.sh@955 -- # kill 2577410 00:05:55.628 20:37:19 -- common/autotest_common.sh@960 -- # wait 2577410 00:05:55.628 spdk_app_start is called in Round 0. 00:05:55.628 Shutdown signal received, stop current app iteration 00:05:55.628 Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 reinitialization... 00:05:55.628 spdk_app_start is called in Round 1. 00:05:55.628 Shutdown signal received, stop current app iteration 00:05:55.628 Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 reinitialization... 00:05:55.628 spdk_app_start is called in Round 2. 00:05:55.628 Shutdown signal received, stop current app iteration 00:05:55.628 Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 reinitialization... 00:05:55.628 spdk_app_start is called in Round 3. 
00:05:55.628 Shutdown signal received, stop current app iteration 00:05:55.628 20:37:19 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.628 20:37:19 -- event/event.sh@42 -- # return 0 00:05:55.628 00:05:55.628 real 0m16.020s 00:05:55.628 user 0m35.106s 00:05:55.628 sys 0m2.303s 00:05:55.628 20:37:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.628 20:37:19 -- common/autotest_common.sh@10 -- # set +x 00:05:55.628 ************************************ 00:05:55.628 END TEST app_repeat 00:05:55.628 ************************************ 00:05:55.628 20:37:19 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.628 20:37:19 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.628 20:37:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.628 20:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.628 20:37:19 -- common/autotest_common.sh@10 -- # set +x 00:05:55.628 ************************************ 00:05:55.628 START TEST cpu_locks 00:05:55.628 ************************************ 00:05:55.628 20:37:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.628 * Looking for test storage... 00:05:55.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:55.628 20:37:20 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:55.628 20:37:20 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:55.628 20:37:20 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:55.628 20:37:20 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:55.628 20:37:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.628 20:37:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.628 20:37:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.890 ************************************ 00:05:55.890 START TEST default_locks 00:05:55.890 ************************************ 00:05:55.890 20:37:20 -- common/autotest_common.sh@1111 -- # default_locks 00:05:55.890 20:37:20 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2580755 00:05:55.890 20:37:20 -- event/cpu_locks.sh@47 -- # waitforlisten 2580755 00:05:55.890 20:37:20 -- common/autotest_common.sh@817 -- # '[' -z 2580755 ']' 00:05:55.890 20:37:20 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.890 20:37:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.890 20:37:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.890 20:37:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.890 20:37:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.890 20:37:20 -- common/autotest_common.sh@10 -- # set +x 00:05:55.890 [2024-04-24 20:37:20.430043] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:05:55.890 [2024-04-24 20:37:20.430102] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580755 ] 00:05:55.890 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.890 [2024-04-24 20:37:20.489875] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.151 [2024-04-24 20:37:20.557080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.151 20:37:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.151 20:37:20 -- common/autotest_common.sh@850 -- # return 0 00:05:56.151 20:37:20 -- event/cpu_locks.sh@49 -- # locks_exist 2580755 00:05:56.151 20:37:20 -- event/cpu_locks.sh@22 -- # lslocks -p 2580755 00:05:56.151 20:37:20 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.724 lslocks: write error 00:05:56.724 20:37:21 -- event/cpu_locks.sh@50 -- # killprocess 2580755 00:05:56.724 20:37:21 -- common/autotest_common.sh@936 -- # '[' -z 2580755 ']' 00:05:56.724 20:37:21 -- common/autotest_common.sh@940 -- # kill -0 2580755 00:05:56.724 20:37:21 -- common/autotest_common.sh@941 -- # uname 00:05:56.724 20:37:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.724 20:37:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2580755 00:05:56.724 20:37:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.724 20:37:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.724 20:37:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2580755' 00:05:56.724 killing process with pid 2580755 00:05:56.724 20:37:21 -- common/autotest_common.sh@955 -- # kill 2580755 00:05:56.724 20:37:21 -- common/autotest_common.sh@960 -- # wait 2580755 00:05:57.005 20:37:21 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2580755 00:05:57.005 20:37:21 -- common/autotest_common.sh@638 -- # local es=0 00:05:57.005 20:37:21 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2580755 00:05:57.005 20:37:21 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:57.005 20:37:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.005 20:37:21 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:57.005 20:37:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:57.005 20:37:21 -- common/autotest_common.sh@641 -- # waitforlisten 2580755 00:05:57.005 20:37:21 -- common/autotest_common.sh@817 -- # '[' -z 2580755 ']' 00:05:57.005 20:37:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.005 20:37:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.005 20:37:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
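The killprocess helper used above to take down the lock-holding target can be reconstructed from the xtrace markers (autotest_common.sh @936 through @960). The sketch below is an approximation: the uname check suggests a non-Linux branch this excerpt never takes, and the sudo special case is reduced to a comment:

  killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                     # bail out if the process is already gone
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" = sudo ] && return 1     # simplification: the real helper handles sudo-wrapped targets differently
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                            # reap it so the follow-up checks see a clean exit
  }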
00:05:57.005 20:37:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.005 20:37:21 -- common/autotest_common.sh@10 -- # set +x 00:05:57.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2580755) - No such process 00:05:57.005 ERROR: process (pid: 2580755) is no longer running 00:05:57.005 20:37:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:57.005 20:37:21 -- common/autotest_common.sh@850 -- # return 1 00:05:57.005 20:37:21 -- common/autotest_common.sh@641 -- # es=1 00:05:57.005 20:37:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:57.005 20:37:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:57.005 20:37:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:57.005 20:37:21 -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.005 20:37:21 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.005 20:37:21 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.005 20:37:21 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.005 00:05:57.005 real 0m1.127s 00:05:57.005 user 0m1.138s 00:05:57.005 sys 0m0.511s 00:05:57.005 20:37:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.005 20:37:21 -- common/autotest_common.sh@10 -- # set +x 00:05:57.005 ************************************ 00:05:57.005 END TEST default_locks 00:05:57.005 ************************************ 00:05:57.005 20:37:21 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.005 20:37:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.005 20:37:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.005 20:37:21 -- common/autotest_common.sh@10 -- # set +x 00:05:57.274 ************************************ 00:05:57.274 START TEST default_locks_via_rpc 00:05:57.274 ************************************ 00:05:57.274 20:37:21 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:57.274 20:37:21 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2581056 00:05:57.274 20:37:21 -- event/cpu_locks.sh@63 -- # waitforlisten 2581056 00:05:57.274 20:37:21 -- common/autotest_common.sh@817 -- # '[' -z 2581056 ']' 00:05:57.274 20:37:21 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.274 20:37:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.274 20:37:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.274 20:37:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.274 20:37:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.274 20:37:21 -- common/autotest_common.sh@10 -- # set +x 00:05:57.274 [2024-04-24 20:37:21.738419] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
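default_locks_via_rpc, whose startup is traced above, exercises the same core lock through RPC instead of process lifetime. A condensed sketch of the checks the next stretch of the trace performs (locks_exist mirrors the lslocks-and-grep pair in the xtrace, no_locks is approximated here by its negation, and rpc_cmd is the autotest wrapper around scripts/rpc.py):

  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock        # the target holds a lock on its per-core lock file
  }
  rpc_cmd framework_disable_cpumask_locks            # release the core-0 lock on the running target
  ! locks_exist "$spdk_tgt_pid"                      # no_locks: nothing should be held now
  rpc_cmd framework_enable_cpumask_locks             # take the lock back over RPC
  locks_exist "$spdk_tgt_pid"                        # and verify it is held again before killprocess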
00:05:57.274 [2024-04-24 20:37:21.738478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581056 ] 00:05:57.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.274 [2024-04-24 20:37:21.816165] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.274 [2024-04-24 20:37:21.885839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.218 20:37:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.218 20:37:22 -- common/autotest_common.sh@850 -- # return 0 00:05:58.218 20:37:22 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:58.218 20:37:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.218 20:37:22 -- common/autotest_common.sh@10 -- # set +x 00:05:58.218 20:37:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.218 20:37:22 -- event/cpu_locks.sh@67 -- # no_locks 00:05:58.218 20:37:22 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:58.218 20:37:22 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:58.218 20:37:22 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:58.218 20:37:22 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.218 20:37:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:58.218 20:37:22 -- common/autotest_common.sh@10 -- # set +x 00:05:58.218 20:37:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:58.218 20:37:22 -- event/cpu_locks.sh@71 -- # locks_exist 2581056 00:05:58.218 20:37:22 -- event/cpu_locks.sh@22 -- # lslocks -p 2581056 00:05:58.218 20:37:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.479 20:37:23 -- event/cpu_locks.sh@73 -- # killprocess 2581056 00:05:58.479 20:37:23 -- common/autotest_common.sh@936 -- # '[' -z 2581056 ']' 00:05:58.479 20:37:23 -- common/autotest_common.sh@940 -- # kill -0 2581056 00:05:58.479 20:37:23 -- common/autotest_common.sh@941 -- # uname 00:05:58.479 20:37:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:58.479 20:37:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2581056 00:05:58.479 20:37:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:58.479 20:37:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:58.479 20:37:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2581056' 00:05:58.479 killing process with pid 2581056 00:05:58.479 20:37:23 -- common/autotest_common.sh@955 -- # kill 2581056 00:05:58.479 20:37:23 -- common/autotest_common.sh@960 -- # wait 2581056 00:05:58.740 00:05:58.740 real 0m1.585s 00:05:58.740 user 0m1.763s 00:05:58.740 sys 0m0.501s 00:05:58.740 20:37:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.740 20:37:23 -- common/autotest_common.sh@10 -- # set +x 00:05:58.740 ************************************ 00:05:58.740 END TEST default_locks_via_rpc 00:05:58.740 ************************************ 00:05:58.740 20:37:23 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.740 20:37:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.740 20:37:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.740 20:37:23 -- common/autotest_common.sh@10 -- # set +x 00:05:59.001 ************************************ 00:05:59.001 START TEST non_locking_app_on_locked_coremask 
00:05:59.001 ************************************ 00:05:59.001 20:37:23 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:59.001 20:37:23 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2581423 00:05:59.001 20:37:23 -- event/cpu_locks.sh@81 -- # waitforlisten 2581423 /var/tmp/spdk.sock 00:05:59.001 20:37:23 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.001 20:37:23 -- common/autotest_common.sh@817 -- # '[' -z 2581423 ']' 00:05:59.001 20:37:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.001 20:37:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.001 20:37:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.001 20:37:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.001 20:37:23 -- common/autotest_common.sh@10 -- # set +x 00:05:59.001 [2024-04-24 20:37:23.501257] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:59.001 [2024-04-24 20:37:23.501310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581423 ] 00:05:59.001 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.001 [2024-04-24 20:37:23.579513] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.262 [2024-04-24 20:37:23.649428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.870 20:37:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.870 20:37:24 -- common/autotest_common.sh@850 -- # return 0 00:05:59.870 20:37:24 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2581756 00:05:59.870 20:37:24 -- event/cpu_locks.sh@85 -- # waitforlisten 2581756 /var/tmp/spdk2.sock 00:05:59.870 20:37:24 -- common/autotest_common.sh@817 -- # '[' -z 2581756 ']' 00:05:59.870 20:37:24 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:59.870 20:37:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.870 20:37:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.870 20:37:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.870 20:37:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.870 20:37:24 -- common/autotest_common.sh@10 -- # set +x 00:05:59.870 [2024-04-24 20:37:24.407225] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:05:59.870 [2024-04-24 20:37:24.407277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581756 ] 00:05:59.870 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.130 [2024-04-24 20:37:24.491794] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
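non_locking_app_on_locked_coremask, traced above, shows that a second target can share an already-locked core as long as it opts out of the CPU-mask lock. A minimal sketch of the two launches ($SPDK_BIN stands in for the workspace's build/bin path; the pid variable names follow the xtrace):

  "$SPDK_BIN"/spdk_tgt -m 0x1 &                                        # takes the core-0 lock; RPC on /var/tmp/spdk.sock
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock
  "$SPDK_BIN"/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # prints "CPU core locks deactivated."
  spdk_tgt_pid2=$!
  waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock                   # both now run on core 0 without a lock conflict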
00:06:00.131 [2024-04-24 20:37:24.491821] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.131 [2024-04-24 20:37:24.618792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.700 20:37:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.701 20:37:25 -- common/autotest_common.sh@850 -- # return 0 00:06:00.701 20:37:25 -- event/cpu_locks.sh@87 -- # locks_exist 2581423 00:06:00.701 20:37:25 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.701 20:37:25 -- event/cpu_locks.sh@22 -- # lslocks -p 2581423 00:06:01.272 lslocks: write error 00:06:01.272 20:37:25 -- event/cpu_locks.sh@89 -- # killprocess 2581423 00:06:01.272 20:37:25 -- common/autotest_common.sh@936 -- # '[' -z 2581423 ']' 00:06:01.272 20:37:25 -- common/autotest_common.sh@940 -- # kill -0 2581423 00:06:01.272 20:37:25 -- common/autotest_common.sh@941 -- # uname 00:06:01.272 20:37:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.272 20:37:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2581423 00:06:01.272 20:37:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.272 20:37:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.272 20:37:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2581423' 00:06:01.272 killing process with pid 2581423 00:06:01.272 20:37:25 -- common/autotest_common.sh@955 -- # kill 2581423 00:06:01.272 20:37:25 -- common/autotest_common.sh@960 -- # wait 2581423 00:06:01.532 20:37:26 -- event/cpu_locks.sh@90 -- # killprocess 2581756 00:06:01.532 20:37:26 -- common/autotest_common.sh@936 -- # '[' -z 2581756 ']' 00:06:01.532 20:37:26 -- common/autotest_common.sh@940 -- # kill -0 2581756 00:06:01.532 20:37:26 -- common/autotest_common.sh@941 -- # uname 00:06:01.532 20:37:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.532 20:37:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2581756 00:06:01.792 20:37:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.793 20:37:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.793 20:37:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2581756' 00:06:01.793 killing process with pid 2581756 00:06:01.793 20:37:26 -- common/autotest_common.sh@955 -- # kill 2581756 00:06:01.793 20:37:26 -- common/autotest_common.sh@960 -- # wait 2581756 00:06:01.793 00:06:01.793 real 0m2.946s 00:06:01.793 user 0m3.344s 00:06:01.793 sys 0m0.828s 00:06:01.793 20:37:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.793 20:37:26 -- common/autotest_common.sh@10 -- # set +x 00:06:01.793 ************************************ 00:06:01.793 END TEST non_locking_app_on_locked_coremask 00:06:01.793 ************************************ 00:06:01.793 20:37:26 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.793 20:37:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.793 20:37:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.793 20:37:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.054 ************************************ 00:06:02.054 START TEST locking_app_on_unlocked_coremask 00:06:02.054 ************************************ 00:06:02.054 20:37:26 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:06:02.054 20:37:26 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2582137 00:06:02.054 20:37:26 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2582137 /var/tmp/spdk.sock 00:06:02.054 20:37:26 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:02.054 20:37:26 -- common/autotest_common.sh@817 -- # '[' -z 2582137 ']' 00:06:02.054 20:37:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.054 20:37:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.054 20:37:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.054 20:37:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.054 20:37:26 -- common/autotest_common.sh@10 -- # set +x 00:06:02.054 [2024-04-24 20:37:26.623579] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:02.054 [2024-04-24 20:37:26.623635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582137 ] 00:06:02.054 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.314 [2024-04-24 20:37:26.701392] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:02.314 [2024-04-24 20:37:26.701423] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.314 [2024-04-24 20:37:26.771019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.885 20:37:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.885 20:37:27 -- common/autotest_common.sh@850 -- # return 0 00:06:02.885 20:37:27 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.885 20:37:27 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2582389 00:06:02.885 20:37:27 -- event/cpu_locks.sh@103 -- # waitforlisten 2582389 /var/tmp/spdk2.sock 00:06:02.885 20:37:27 -- common/autotest_common.sh@817 -- # '[' -z 2582389 ']' 00:06:02.885 20:37:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.885 20:37:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.885 20:37:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.885 20:37:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.885 20:37:27 -- common/autotest_common.sh@10 -- # set +x 00:06:02.885 [2024-04-24 20:37:27.516505] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:02.885 [2024-04-24 20:37:27.516554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582389 ] 00:06:03.146 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.146 [2024-04-24 20:37:27.603075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.146 [2024-04-24 20:37:27.731267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.087 20:37:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:04.087 20:37:28 -- common/autotest_common.sh@850 -- # return 0 00:06:04.087 20:37:28 -- event/cpu_locks.sh@105 -- # locks_exist 2582389 00:06:04.087 20:37:28 -- event/cpu_locks.sh@22 -- # lslocks -p 2582389 00:06:04.087 20:37:28 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.348 lslocks: write error 00:06:04.348 20:37:28 -- event/cpu_locks.sh@107 -- # killprocess 2582137 00:06:04.348 20:37:28 -- common/autotest_common.sh@936 -- # '[' -z 2582137 ']' 00:06:04.348 20:37:28 -- common/autotest_common.sh@940 -- # kill -0 2582137 00:06:04.348 20:37:28 -- common/autotest_common.sh@941 -- # uname 00:06:04.348 20:37:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.348 20:37:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2582137 00:06:04.348 20:37:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.348 20:37:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.348 20:37:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2582137' 00:06:04.348 killing process with pid 2582137 00:06:04.348 20:37:28 -- common/autotest_common.sh@955 -- # kill 2582137 00:06:04.348 20:37:28 -- common/autotest_common.sh@960 -- # wait 2582137 00:06:04.917 20:37:29 -- event/cpu_locks.sh@108 -- # killprocess 2582389 00:06:04.917 20:37:29 -- common/autotest_common.sh@936 -- # '[' -z 2582389 ']' 00:06:04.917 20:37:29 -- common/autotest_common.sh@940 -- # kill -0 2582389 00:06:04.917 20:37:29 -- common/autotest_common.sh@941 -- # uname 00:06:04.917 20:37:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:04.917 20:37:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2582389 00:06:04.917 20:37:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:04.917 20:37:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:04.918 20:37:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2582389' 00:06:04.918 killing process with pid 2582389 00:06:04.918 20:37:29 -- common/autotest_common.sh@955 -- # kill 2582389 00:06:04.918 20:37:29 -- common/autotest_common.sh@960 -- # wait 2582389 00:06:05.178 00:06:05.178 real 0m3.079s 00:06:05.178 user 0m3.489s 00:06:05.178 sys 0m0.875s 00:06:05.178 20:37:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.178 20:37:29 -- common/autotest_common.sh@10 -- # set +x 00:06:05.178 ************************************ 00:06:05.178 END TEST locking_app_on_unlocked_coremask 00:06:05.178 ************************************ 00:06:05.178 20:37:29 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:05.178 20:37:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:05.178 20:37:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.178 20:37:29 -- common/autotest_common.sh@10 -- # set +x 00:06:05.438 
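The locks_exist checks threaded through the runs above reduce to asking lslocks whether the target pid holds a POSIX lock on one of the /var/tmp/spdk_cpu_lock_* files; the stray "lslocks: write error" lines appear to be the harmless EPIPE lslocks gets when grep -q closes the pipe after its first match. A minimal sketch of that check (the pid is taken from the log, everything else is illustrative):

```bash
# Hand-rolled version of the harness's locks_exist helper (illustrative).
pid=2582389                                    # spdk_tgt started with -m 0x1 above
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds a core lock (/var/tmp/spdk_cpu_lock_NNN)"
fi
# grep -q exits on the first match and closes the pipe, so lslocks may print
# "write error"; only the pipeline's (grep's) exit status matters here.
```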
************************************ 00:06:05.438 START TEST locking_app_on_locked_coremask 00:06:05.438 ************************************ 00:06:05.439 20:37:29 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:06:05.439 20:37:29 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2582861 00:06:05.439 20:37:29 -- event/cpu_locks.sh@116 -- # waitforlisten 2582861 /var/tmp/spdk.sock 00:06:05.439 20:37:29 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.439 20:37:29 -- common/autotest_common.sh@817 -- # '[' -z 2582861 ']' 00:06:05.439 20:37:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.439 20:37:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:05.439 20:37:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.439 20:37:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:05.439 20:37:29 -- common/autotest_common.sh@10 -- # set +x 00:06:05.439 [2024-04-24 20:37:29.889829] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:05.439 [2024-04-24 20:37:29.889884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582861 ] 00:06:05.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.439 [2024-04-24 20:37:29.969054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.439 [2024-04-24 20:37:30.041362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.380 20:37:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:06.380 20:37:30 -- common/autotest_common.sh@850 -- # return 0 00:06:06.380 20:37:30 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2583050 00:06:06.380 20:37:30 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2583050 /var/tmp/spdk2.sock 00:06:06.380 20:37:30 -- common/autotest_common.sh@638 -- # local es=0 00:06:06.380 20:37:30 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.380 20:37:30 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2583050 /var/tmp/spdk2.sock 00:06:06.380 20:37:30 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:06.380 20:37:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:06.380 20:37:30 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:06.380 20:37:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:06.380 20:37:30 -- common/autotest_common.sh@641 -- # waitforlisten 2583050 /var/tmp/spdk2.sock 00:06:06.380 20:37:30 -- common/autotest_common.sh@817 -- # '[' -z 2583050 ']' 00:06:06.380 20:37:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.380 20:37:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:06.380 20:37:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:06.380 20:37:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:06.380 20:37:30 -- common/autotest_common.sh@10 -- # set +x 00:06:06.380 [2024-04-24 20:37:30.829533] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:06.380 [2024-04-24 20:37:30.829596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583050 ] 00:06:06.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.380 [2024-04-24 20:37:30.915716] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2582861 has claimed it. 00:06:06.380 [2024-04-24 20:37:30.919759] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2583050) - No such process 00:06:06.950 ERROR: process (pid: 2583050) is no longer running 00:06:06.950 20:37:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:06.950 20:37:31 -- common/autotest_common.sh@850 -- # return 1 00:06:06.950 20:37:31 -- common/autotest_common.sh@641 -- # es=1 00:06:06.950 20:37:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:06.950 20:37:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:06.950 20:37:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:06.950 20:37:31 -- event/cpu_locks.sh@122 -- # locks_exist 2582861 00:06:06.950 20:37:31 -- event/cpu_locks.sh@22 -- # lslocks -p 2582861 00:06:06.950 20:37:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.521 lslocks: write error 00:06:07.521 20:37:31 -- event/cpu_locks.sh@124 -- # killprocess 2582861 00:06:07.521 20:37:31 -- common/autotest_common.sh@936 -- # '[' -z 2582861 ']' 00:06:07.521 20:37:31 -- common/autotest_common.sh@940 -- # kill -0 2582861 00:06:07.521 20:37:31 -- common/autotest_common.sh@941 -- # uname 00:06:07.521 20:37:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:07.521 20:37:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2582861 00:06:07.521 20:37:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:07.521 20:37:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:07.521 20:37:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2582861' 00:06:07.521 killing process with pid 2582861 00:06:07.521 20:37:31 -- common/autotest_common.sh@955 -- # kill 2582861 00:06:07.521 20:37:31 -- common/autotest_common.sh@960 -- # wait 2582861 00:06:07.782 00:06:07.782 real 0m2.339s 00:06:07.782 user 0m2.695s 00:06:07.782 sys 0m0.642s 00:06:07.782 20:37:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.782 20:37:32 -- common/autotest_common.sh@10 -- # set +x 00:06:07.782 ************************************ 00:06:07.782 END TEST locking_app_on_locked_coremask 00:06:07.782 ************************************ 00:06:07.782 20:37:32 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.782 20:37:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.782 20:37:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.782 20:37:32 -- common/autotest_common.sh@10 -- # set +x 00:06:07.782 ************************************ 00:06:07.782 START TEST locking_overlapped_coremask 00:06:07.782 
************************************ 00:06:07.782 20:37:32 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:06:07.782 20:37:32 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2583377 00:06:07.782 20:37:32 -- event/cpu_locks.sh@133 -- # waitforlisten 2583377 /var/tmp/spdk.sock 00:06:07.782 20:37:32 -- common/autotest_common.sh@817 -- # '[' -z 2583377 ']' 00:06:07.782 20:37:32 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.782 20:37:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.782 20:37:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.782 20:37:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.782 20:37:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.782 20:37:32 -- common/autotest_common.sh@10 -- # set +x 00:06:07.782 [2024-04-24 20:37:32.413113] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:07.782 [2024-04-24 20:37:32.413171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583377 ] 00:06:08.043 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.043 [2024-04-24 20:37:32.492805] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.043 [2024-04-24 20:37:32.563737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.043 [2024-04-24 20:37:32.563829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.043 [2024-04-24 20:37:32.563832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.983 20:37:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:08.983 20:37:33 -- common/autotest_common.sh@850 -- # return 0 00:06:08.983 20:37:33 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2583577 00:06:08.983 20:37:33 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2583577 /var/tmp/spdk2.sock 00:06:08.983 20:37:33 -- common/autotest_common.sh@638 -- # local es=0 00:06:08.983 20:37:33 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.983 20:37:33 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2583577 /var/tmp/spdk2.sock 00:06:08.983 20:37:33 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:08.983 20:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:08.983 20:37:33 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:08.983 20:37:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:08.983 20:37:33 -- common/autotest_common.sh@641 -- # waitforlisten 2583577 /var/tmp/spdk2.sock 00:06:08.983 20:37:33 -- common/autotest_common.sh@817 -- # '[' -z 2583577 ']' 00:06:08.983 20:37:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.983 20:37:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.983 20:37:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
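The two coremasks chosen above collide on exactly one core: 0x7 is binary 00111 (cores 0-2, matching the three reactors just started) and 0x1c is binary 11100 (cores 2-4), so their intersection is core 2 alone, which is the core the second target fails to claim a few lines further down. The overlap can be sanity-checked with shell arithmetic (illustrative):

```bash
# Intersection of the two coremasks used by this test:
printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2
```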
00:06:08.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.983 20:37:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.983 20:37:33 -- common/autotest_common.sh@10 -- # set +x 00:06:08.983 [2024-04-24 20:37:33.335098] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:08.983 [2024-04-24 20:37:33.335153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583577 ] 00:06:08.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.983 [2024-04-24 20:37:33.408062] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2583377 has claimed it. 00:06:08.983 [2024-04-24 20:37:33.408092] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2583577) - No such process 00:06:09.556 ERROR: process (pid: 2583577) is no longer running 00:06:09.556 20:37:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.556 20:37:33 -- common/autotest_common.sh@850 -- # return 1 00:06:09.556 20:37:33 -- common/autotest_common.sh@641 -- # es=1 00:06:09.556 20:37:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:09.556 20:37:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:09.556 20:37:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:09.556 20:37:33 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.556 20:37:33 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.556 20:37:33 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.556 20:37:33 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.556 20:37:33 -- event/cpu_locks.sh@141 -- # killprocess 2583377 00:06:09.556 20:37:33 -- common/autotest_common.sh@936 -- # '[' -z 2583377 ']' 00:06:09.556 20:37:33 -- common/autotest_common.sh@940 -- # kill -0 2583377 00:06:09.556 20:37:33 -- common/autotest_common.sh@941 -- # uname 00:06:09.556 20:37:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.556 20:37:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2583377 00:06:09.556 20:37:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.556 20:37:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.556 20:37:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2583377' 00:06:09.556 killing process with pid 2583377 00:06:09.556 20:37:34 -- common/autotest_common.sh@955 -- # kill 2583377 00:06:09.556 20:37:34 -- common/autotest_common.sh@960 -- # wait 2583377 00:06:09.817 00:06:09.817 real 0m1.899s 00:06:09.817 user 0m5.469s 00:06:09.817 sys 0m0.408s 00:06:09.817 20:37:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:09.817 20:37:34 -- common/autotest_common.sh@10 -- # set +x 00:06:09.817 ************************************ 00:06:09.817 END TEST locking_overlapped_coremask 00:06:09.817 ************************************ 00:06:09.817 20:37:34 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.817 20:37:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.817 20:37:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.817 20:37:34 -- common/autotest_common.sh@10 -- # set +x 00:06:09.817 ************************************ 00:06:09.817 START TEST locking_overlapped_coremask_via_rpc 00:06:09.817 ************************************ 00:06:09.817 20:37:34 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:09.817 20:37:34 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2583941 00:06:09.817 20:37:34 -- event/cpu_locks.sh@149 -- # waitforlisten 2583941 /var/tmp/spdk.sock 00:06:09.817 20:37:34 -- common/autotest_common.sh@817 -- # '[' -z 2583941 ']' 00:06:09.817 20:37:34 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.817 20:37:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.817 20:37:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:09.817 20:37:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.817 20:37:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:09.817 20:37:34 -- common/autotest_common.sh@10 -- # set +x 00:06:10.077 [2024-04-24 20:37:34.502522] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:10.077 [2024-04-24 20:37:34.502569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583941 ] 00:06:10.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.077 [2024-04-24 20:37:34.576820] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:10.077 [2024-04-24 20:37:34.576845] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.077 [2024-04-24 20:37:34.641985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.077 [2024-04-24 20:37:34.642119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.077 [2024-04-24 20:37:34.642122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.018 20:37:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.018 20:37:35 -- common/autotest_common.sh@850 -- # return 0 00:06:11.018 20:37:35 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2583962 00:06:11.018 20:37:35 -- event/cpu_locks.sh@153 -- # waitforlisten 2583962 /var/tmp/spdk2.sock 00:06:11.018 20:37:35 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:11.018 20:37:35 -- common/autotest_common.sh@817 -- # '[' -z 2583962 ']' 00:06:11.018 20:37:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.018 20:37:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.018 20:37:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:11.018 20:37:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.018 20:37:35 -- common/autotest_common.sh@10 -- # set +x 00:06:11.018 [2024-04-24 20:37:35.401003] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:11.018 [2024-04-24 20:37:35.401054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2583962 ] 00:06:11.018 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.018 [2024-04-24 20:37:35.470995] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:11.018 [2024-04-24 20:37:35.471017] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.018 [2024-04-24 20:37:35.575324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.018 [2024-04-24 20:37:35.578870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.018 [2024-04-24 20:37:35.578872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.962 20:37:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.962 20:37:36 -- common/autotest_common.sh@850 -- # return 0 00:06:11.962 20:37:36 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.962 20:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:11.962 20:37:36 -- common/autotest_common.sh@10 -- # set +x 00:06:11.962 20:37:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:11.962 20:37:36 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.962 20:37:36 -- common/autotest_common.sh@638 -- # local es=0 00:06:11.962 20:37:36 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.962 20:37:36 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:11.962 20:37:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.962 20:37:36 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:11.962 20:37:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:11.962 20:37:36 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.962 20:37:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:11.962 20:37:36 -- common/autotest_common.sh@10 -- # set +x 00:06:11.962 [2024-04-24 20:37:36.290789] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2583941 has claimed it. 
00:06:11.962 request: 00:06:11.962 { 00:06:11.962 "method": "framework_enable_cpumask_locks", 00:06:11.962 "req_id": 1 00:06:11.962 } 00:06:11.962 Got JSON-RPC error response 00:06:11.962 response: 00:06:11.962 { 00:06:11.962 "code": -32603, 00:06:11.962 "message": "Failed to claim CPU core: 2" 00:06:11.962 } 00:06:11.962 20:37:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:11.962 20:37:36 -- common/autotest_common.sh@641 -- # es=1 00:06:11.962 20:37:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:11.962 20:37:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:11.962 20:37:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:11.962 20:37:36 -- event/cpu_locks.sh@158 -- # waitforlisten 2583941 /var/tmp/spdk.sock 00:06:11.962 20:37:36 -- common/autotest_common.sh@817 -- # '[' -z 2583941 ']' 00:06:11.962 20:37:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.962 20:37:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.962 20:37:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.962 20:37:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.962 20:37:36 -- common/autotest_common.sh@10 -- # set +x 00:06:11.962 20:37:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:11.962 20:37:36 -- common/autotest_common.sh@850 -- # return 0 00:06:11.962 20:37:36 -- event/cpu_locks.sh@159 -- # waitforlisten 2583962 /var/tmp/spdk2.sock 00:06:11.962 20:37:36 -- common/autotest_common.sh@817 -- # '[' -z 2583962 ']' 00:06:11.962 20:37:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.962 20:37:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:11.962 20:37:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
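The JSON-RPC exchange captured above can be reproduced outside the harness: rpc_cmd ultimately drives SPDK's scripts/rpc.py, so an equivalent manual invocation would look roughly like the sketch below (socket paths taken from the log; treat the exact wrapper behaviour as an assumption):

```bash
# Manual re-run of the cpumask-lock RPCs shown above (illustrative).
./scripts/rpc.py framework_enable_cpumask_locks                         # default /var/tmp/spdk.sock: locks claimed
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # expected to fail with -32603
                                                                        # "Failed to claim CPU core: 2"
```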
00:06:11.962 20:37:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:11.962 20:37:36 -- common/autotest_common.sh@10 -- # set +x 00:06:12.223 20:37:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:12.223 20:37:36 -- common/autotest_common.sh@850 -- # return 0 00:06:12.223 20:37:36 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:12.223 20:37:36 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.223 20:37:36 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.223 20:37:36 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.223 00:06:12.223 real 0m2.291s 00:06:12.223 user 0m1.028s 00:06:12.223 sys 0m0.177s 00:06:12.223 20:37:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.223 20:37:36 -- common/autotest_common.sh@10 -- # set +x 00:06:12.223 ************************************ 00:06:12.223 END TEST locking_overlapped_coremask_via_rpc 00:06:12.223 ************************************ 00:06:12.223 20:37:36 -- event/cpu_locks.sh@174 -- # cleanup 00:06:12.223 20:37:36 -- event/cpu_locks.sh@15 -- # [[ -z 2583941 ]] 00:06:12.223 20:37:36 -- event/cpu_locks.sh@15 -- # killprocess 2583941 00:06:12.223 20:37:36 -- common/autotest_common.sh@936 -- # '[' -z 2583941 ']' 00:06:12.223 20:37:36 -- common/autotest_common.sh@940 -- # kill -0 2583941 00:06:12.223 20:37:36 -- common/autotest_common.sh@941 -- # uname 00:06:12.223 20:37:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.223 20:37:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2583941 00:06:12.223 20:37:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:12.223 20:37:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:12.223 20:37:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2583941' 00:06:12.223 killing process with pid 2583941 00:06:12.223 20:37:36 -- common/autotest_common.sh@955 -- # kill 2583941 00:06:12.223 20:37:36 -- common/autotest_common.sh@960 -- # wait 2583941 00:06:12.487 20:37:37 -- event/cpu_locks.sh@16 -- # [[ -z 2583962 ]] 00:06:12.487 20:37:37 -- event/cpu_locks.sh@16 -- # killprocess 2583962 00:06:12.487 20:37:37 -- common/autotest_common.sh@936 -- # '[' -z 2583962 ']' 00:06:12.487 20:37:37 -- common/autotest_common.sh@940 -- # kill -0 2583962 00:06:12.487 20:37:37 -- common/autotest_common.sh@941 -- # uname 00:06:12.487 20:37:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:12.487 20:37:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2583962 00:06:12.487 20:37:37 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:12.487 20:37:37 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:12.487 20:37:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2583962' 00:06:12.487 killing process with pid 2583962 00:06:12.487 20:37:37 -- common/autotest_common.sh@955 -- # kill 2583962 00:06:12.487 20:37:37 -- common/autotest_common.sh@960 -- # wait 2583962 00:06:12.754 20:37:37 -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.754 20:37:37 -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.754 20:37:37 -- event/cpu_locks.sh@15 -- # [[ -z 2583941 ]] 00:06:12.754 20:37:37 -- event/cpu_locks.sh@15 -- # killprocess 2583941 
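check_remaining_locks above compares a glob of the live lock files against a brace expansion for the cores in mask 0x7; with three reactors the names are zero-padded to three digits, so the expected array is exactly the three files below (illustrative echo):

```bash
# What the locks_expected brace expansion evaluates to for coremask 0x7:
echo /var/tmp/spdk_cpu_lock_{000..002}
# -> /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002
```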
00:06:12.754 20:37:37 -- common/autotest_common.sh@936 -- # '[' -z 2583941 ']' 00:06:12.754 20:37:37 -- common/autotest_common.sh@940 -- # kill -0 2583941 00:06:12.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2583941) - No such process 00:06:12.754 20:37:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2583941 is not found' 00:06:12.754 Process with pid 2583941 is not found 00:06:12.755 20:37:37 -- event/cpu_locks.sh@16 -- # [[ -z 2583962 ]] 00:06:12.755 20:37:37 -- event/cpu_locks.sh@16 -- # killprocess 2583962 00:06:12.755 20:37:37 -- common/autotest_common.sh@936 -- # '[' -z 2583962 ']' 00:06:12.755 20:37:37 -- common/autotest_common.sh@940 -- # kill -0 2583962 00:06:12.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2583962) - No such process 00:06:12.755 20:37:37 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2583962 is not found' 00:06:12.755 Process with pid 2583962 is not found 00:06:12.755 20:37:37 -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.755 00:06:12.755 real 0m17.183s 00:06:12.755 user 0m30.045s 00:06:12.755 sys 0m5.139s 00:06:12.755 20:37:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.755 20:37:37 -- common/autotest_common.sh@10 -- # set +x 00:06:12.755 ************************************ 00:06:12.755 END TEST cpu_locks 00:06:12.755 ************************************ 00:06:12.755 00:06:12.755 real 0m44.398s 00:06:12.755 user 1m24.239s 00:06:12.755 sys 0m8.813s 00:06:12.755 20:37:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:12.755 20:37:37 -- common/autotest_common.sh@10 -- # set +x 00:06:12.755 ************************************ 00:06:12.755 END TEST event 00:06:12.755 ************************************ 00:06:12.755 20:37:37 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:12.755 20:37:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:12.755 20:37:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.755 20:37:37 -- common/autotest_common.sh@10 -- # set +x 00:06:13.016 ************************************ 00:06:13.016 START TEST thread 00:06:13.016 ************************************ 00:06:13.016 20:37:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:13.016 * Looking for test storage... 00:06:13.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:13.016 20:37:37 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.016 20:37:37 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:13.016 20:37:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.016 20:37:37 -- common/autotest_common.sh@10 -- # set +x 00:06:13.277 ************************************ 00:06:13.277 START TEST thread_poller_perf 00:06:13.277 ************************************ 00:06:13.277 20:37:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.277 [2024-04-24 20:37:37.807327] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:13.277 [2024-04-24 20:37:37.807426] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584696 ] 00:06:13.277 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.277 [2024-04-24 20:37:37.888971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.537 [2024-04-24 20:37:37.964675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.537 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:14.477 ====================================== 00:06:14.477 busy:2408415316 (cyc) 00:06:14.477 total_run_count: 288000 00:06:14.477 tsc_hz: 2400000000 (cyc) 00:06:14.477 ====================================== 00:06:14.477 poller_cost: 8362 (cyc), 3484 (nsec) 00:06:14.477 00:06:14.477 real 0m1.239s 00:06:14.477 user 0m1.145s 00:06:14.477 sys 0m0.090s 00:06:14.477 20:37:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:14.477 20:37:39 -- common/autotest_common.sh@10 -- # set +x 00:06:14.477 ************************************ 00:06:14.477 END TEST thread_poller_perf 00:06:14.478 ************************************ 00:06:14.478 20:37:39 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.478 20:37:39 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:14.478 20:37:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:14.478 20:37:39 -- common/autotest_common.sh@10 -- # set +x 00:06:14.738 ************************************ 00:06:14.738 START TEST thread_poller_perf 00:06:14.738 ************************************ 00:06:14.738 20:37:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.738 [2024-04-24 20:37:39.235825] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:14.738 [2024-04-24 20:37:39.235915] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584923 ] 00:06:14.738 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.738 [2024-04-24 20:37:39.318035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.998 [2024-04-24 20:37:39.393822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.998 Running 1000 pollers for 1 seconds with 0 microseconds period. 
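The poller_cost line in the summary above is just the busy cycle count divided by the number of poller executions, converted to nanoseconds with the reported TSC frequency; the first run's figures are consistent with plain integer division (illustrative check):

```bash
# Re-deriving poller_cost for the first poller_perf run above:
echo $(( 2408415316 / 288000 ))                            # 8362 cycles per invocation
echo $(( 2408415316 / 288000 * 1000000000 / 2400000000 ))  # 3484 ns at the 2.4 GHz TSC
```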
00:06:15.937 ====================================== 00:06:15.937 busy:2401916530 (cyc) 00:06:15.937 total_run_count: 3696000 00:06:15.937 tsc_hz: 2400000000 (cyc) 00:06:15.937 ====================================== 00:06:15.937 poller_cost: 649 (cyc), 270 (nsec) 00:06:15.937 00:06:15.937 real 0m1.236s 00:06:15.937 user 0m1.141s 00:06:15.937 sys 0m0.090s 00:06:15.937 20:37:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.937 20:37:40 -- common/autotest_common.sh@10 -- # set +x 00:06:15.937 ************************************ 00:06:15.937 END TEST thread_poller_perf 00:06:15.937 ************************************ 00:06:15.937 20:37:40 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:15.937 00:06:15.937 real 0m2.963s 00:06:15.937 user 0m2.467s 00:06:15.937 sys 0m0.467s 00:06:15.937 20:37:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.937 20:37:40 -- common/autotest_common.sh@10 -- # set +x 00:06:15.937 ************************************ 00:06:15.937 END TEST thread 00:06:15.937 ************************************ 00:06:15.937 20:37:40 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:15.937 20:37:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.937 20:37:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.937 20:37:40 -- common/autotest_common.sh@10 -- # set +x 00:06:16.197 ************************************ 00:06:16.197 START TEST accel 00:06:16.197 ************************************ 00:06:16.197 20:37:40 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:16.197 * Looking for test storage... 00:06:16.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:16.197 20:37:40 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:16.197 20:37:40 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:16.197 20:37:40 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:16.197 20:37:40 -- accel/accel.sh@62 -- # spdk_tgt_pid=2585237 00:06:16.197 20:37:40 -- accel/accel.sh@63 -- # waitforlisten 2585237 00:06:16.197 20:37:40 -- common/autotest_common.sh@817 -- # '[' -z 2585237 ']' 00:06:16.197 20:37:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.197 20:37:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.197 20:37:40 -- accel/accel.sh@61 -- # build_accel_config 00:06:16.197 20:37:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.197 20:37:40 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.197 20:37:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.197 20:37:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.197 20:37:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.197 20:37:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.197 20:37:40 -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:16.197 20:37:40 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.197 20:37:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.197 20:37:40 -- accel/accel.sh@41 -- # jq -r . 
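The get_expected_opcs step a few lines below pipes the accel_get_opc_assignments RPC through a small jq filter to turn the JSON opcode-to-module map into opcode=module pairs; with everything assigned to the software module, the transform behaves like this (the sample JSON is only a stand-in for the RPC output):

```bash
# Illustrative run of the jq filter used to harvest opcode assignments below:
echo '{"copy":"software","fill":"software"}' \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# copy=software
# fill=software
```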
00:06:16.197 20:37:40 -- common/autotest_common.sh@10 -- # set +x 00:06:16.458 [2024-04-24 20:37:40.838850] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:16.458 [2024-04-24 20:37:40.838906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585237 ] 00:06:16.458 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.458 [2024-04-24 20:37:40.896863] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.458 [2024-04-24 20:37:40.960315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.718 20:37:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:16.718 20:37:41 -- common/autotest_common.sh@850 -- # return 0 00:06:16.718 20:37:41 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:16.718 20:37:41 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:16.718 20:37:41 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:16.718 20:37:41 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:16.718 20:37:41 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:16.718 20:37:41 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:16.718 20:37:41 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:16.718 20:37:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:16.718 20:37:41 -- common/autotest_common.sh@10 -- # set +x 00:06:16.718 20:37:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:16.718 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.718 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.718 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.718 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.718 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.718 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.718 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.718 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.718 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.718 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.718 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.718 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.718 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.718 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc 
module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # IFS== 00:06:16.719 20:37:41 -- accel/accel.sh@72 -- # read -r opc module 00:06:16.719 20:37:41 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:16.719 20:37:41 -- accel/accel.sh@75 -- # killprocess 2585237 00:06:16.719 20:37:41 -- common/autotest_common.sh@936 -- # '[' -z 2585237 ']' 00:06:16.719 20:37:41 -- common/autotest_common.sh@940 -- # kill -0 2585237 00:06:16.719 20:37:41 -- common/autotest_common.sh@941 -- # uname 00:06:16.719 20:37:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.719 20:37:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2585237 00:06:16.719 20:37:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.719 20:37:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.719 20:37:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2585237' 00:06:16.719 killing process with pid 2585237 00:06:16.719 20:37:41 -- common/autotest_common.sh@955 -- # kill 2585237 00:06:16.719 20:37:41 -- common/autotest_common.sh@960 -- # wait 2585237 00:06:16.979 20:37:41 -- accel/accel.sh@76 -- # trap - ERR 00:06:16.979 20:37:41 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:16.979 20:37:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:16.979 20:37:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.979 20:37:41 -- common/autotest_common.sh@10 -- # set +x 00:06:16.979 20:37:41 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:16.979 20:37:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.979 20:37:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.979 
20:37:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.979 20:37:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:16.979 20:37:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.979 20:37:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.979 20:37:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.979 20:37:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:16.979 20:37:41 -- accel/accel.sh@41 -- # jq -r . 00:06:17.240 20:37:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.240 20:37:41 -- common/autotest_common.sh@10 -- # set +x 00:06:17.240 20:37:41 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:17.240 20:37:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:17.240 20:37:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.240 20:37:41 -- common/autotest_common.sh@10 -- # set +x 00:06:17.240 ************************************ 00:06:17.240 START TEST accel_missing_filename 00:06:17.240 ************************************ 00:06:17.240 20:37:41 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:17.240 20:37:41 -- common/autotest_common.sh@638 -- # local es=0 00:06:17.240 20:37:41 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:17.240 20:37:41 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:17.240 20:37:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:17.240 20:37:41 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:17.240 20:37:41 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:17.240 20:37:41 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:17.240 20:37:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:17.240 20:37:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.240 20:37:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.240 20:37:41 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.240 20:37:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.240 20:37:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.240 20:37:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.240 20:37:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.240 20:37:41 -- accel/accel.sh@41 -- # jq -r . 00:06:17.240 [2024-04-24 20:37:41.836346] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:17.240 [2024-04-24 20:37:41.836445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585551 ] 00:06:17.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.501 [2024-04-24 20:37:41.916247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.501 [2024-04-24 20:37:41.992398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.501 [2024-04-24 20:37:42.024984] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:17.501 [2024-04-24 20:37:42.062636] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:17.501 A filename is required. 
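"A filename is required." is the expected outcome here: the compress workload needs an uncompressed input via -l, so accel_perf aborts, and the surrounding NOT wrapper counts the non-zero exit (the es= bookkeeping that follows) as a pass. Roughly, the failing case versus a well-formed invocation (input path borrowed from the accel_compress_verify run further down; whether it then completes depends on a compress-capable accel module being available):

```bash
# Negative case exercised above, and an invocation that supplies the input file:
./build/examples/accel_perf -t 1 -w compress                     # fails: "A filename is required."
./build/examples/accel_perf -t 1 -w compress -l test/accel/bib   # provides the required -l input
```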
00:06:17.501 20:37:42 -- common/autotest_common.sh@641 -- # es=234 00:06:17.501 20:37:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:17.501 20:37:42 -- common/autotest_common.sh@650 -- # es=106 00:06:17.501 20:37:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:17.501 20:37:42 -- common/autotest_common.sh@658 -- # es=1 00:06:17.501 20:37:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:17.501 00:06:17.501 real 0m0.310s 00:06:17.501 user 0m0.239s 00:06:17.501 sys 0m0.113s 00:06:17.501 20:37:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:17.501 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.501 ************************************ 00:06:17.501 END TEST accel_missing_filename 00:06:17.501 ************************************ 00:06:17.761 20:37:42 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.761 20:37:42 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:17.761 20:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.761 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:17.761 ************************************ 00:06:17.761 START TEST accel_compress_verify 00:06:17.761 ************************************ 00:06:17.761 20:37:42 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.761 20:37:42 -- common/autotest_common.sh@638 -- # local es=0 00:06:17.761 20:37:42 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.761 20:37:42 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:17.761 20:37:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:17.761 20:37:42 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:17.761 20:37:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:17.761 20:37:42 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.761 20:37:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:17.761 20:37:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.761 20:37:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.761 20:37:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.761 20:37:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.761 20:37:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.761 20:37:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.761 20:37:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:17.761 20:37:42 -- accel/accel.sh@41 -- # jq -r . 00:06:17.761 [2024-04-24 20:37:42.289440] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:17.761 [2024-04-24 20:37:42.289476] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585591 ] 00:06:17.761 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.761 [2024-04-24 20:37:42.343717] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.022 [2024-04-24 20:37:42.409609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.022 [2024-04-24 20:37:42.441523] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:18.022 [2024-04-24 20:37:42.478363] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:18.022 00:06:18.022 Compression does not support the verify option, aborting. 00:06:18.022 20:37:42 -- common/autotest_common.sh@641 -- # es=161 00:06:18.022 20:37:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:18.022 20:37:42 -- common/autotest_common.sh@650 -- # es=33 00:06:18.022 20:37:42 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:18.022 20:37:42 -- common/autotest_common.sh@658 -- # es=1 00:06:18.022 20:37:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:18.022 00:06:18.022 real 0m0.256s 00:06:18.022 user 0m0.199s 00:06:18.022 sys 0m0.096s 00:06:18.022 20:37:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.022 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.022 ************************************ 00:06:18.022 END TEST accel_compress_verify 00:06:18.022 ************************************ 00:06:18.022 20:37:42 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:18.022 20:37:42 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:18.022 20:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.022 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.283 ************************************ 00:06:18.283 START TEST accel_wrong_workload 00:06:18.283 ************************************ 00:06:18.283 20:37:42 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:18.283 20:37:42 -- common/autotest_common.sh@638 -- # local es=0 00:06:18.283 20:37:42 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:18.283 20:37:42 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:18.283 20:37:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.283 20:37:42 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:18.283 20:37:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.283 20:37:42 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:18.283 20:37:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:18.283 20:37:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.283 20:37:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.283 20:37:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.283 20:37:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.283 20:37:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.283 20:37:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.283 20:37:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.283 20:37:42 -- accel/accel.sh@41 -- # jq -r . 
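The accel_compress_verify case that finished just above pairs the compress workload with -y (verify result) and an existing input file (test/accel/bib), and accel_perf rejects the combination with "Compression does not support the verify option, aborting." A sketch of that check under the same assumptions as before (relative paths inside the spdk checkout, /dev/fd/62 config omitted):

  # Expected to abort: -y (verify) is not supported together with -w compress.
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib -y \
    || echo "failed as expected: compress does not accept -y"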
00:06:18.284 Unsupported workload type: foobar 00:06:18.284 [2024-04-24 20:37:42.741128] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:18.284 accel_perf options: 00:06:18.284 [-h help message] 00:06:18.284 [-q queue depth per core] 00:06:18.284 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.284 [-T number of threads per core 00:06:18.284 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.284 [-t time in seconds] 00:06:18.284 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.284 [ dif_verify, , dif_generate, dif_generate_copy 00:06:18.284 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.284 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.284 [-S for crc32c workload, use this seed value (default 0) 00:06:18.284 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.284 [-f for fill workload, use this BYTE value (default 255) 00:06:18.284 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.284 [-y verify result if this switch is on] 00:06:18.284 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.284 Can be used to spread operations across a wider range of memory. 00:06:18.284 20:37:42 -- common/autotest_common.sh@641 -- # es=1 00:06:18.284 20:37:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:18.284 20:37:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:18.284 20:37:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:18.284 00:06:18.284 real 0m0.036s 00:06:18.284 user 0m0.020s 00:06:18.284 sys 0m0.016s 00:06:18.284 20:37:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.284 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.284 ************************************ 00:06:18.284 END TEST accel_wrong_workload 00:06:18.284 ************************************ 00:06:18.284 Error: writing output failed: Broken pipe 00:06:18.284 20:37:42 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.284 20:37:42 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:18.284 20:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.284 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.284 ************************************ 00:06:18.284 START TEST accel_negative_buffers 00:06:18.284 ************************************ 00:06:18.284 20:37:42 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:18.284 20:37:42 -- common/autotest_common.sh@638 -- # local es=0 00:06:18.284 20:37:42 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:18.284 20:37:42 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:18.545 20:37:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.545 20:37:42 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:18.545 20:37:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:18.545 20:37:42 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:06:18.545 20:37:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.545 20:37:42 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:18.545 20:37:42 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.545 20:37:42 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.545 20:37:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.545 20:37:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.545 20:37:42 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.545 20:37:42 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.545 20:37:42 -- accel/accel.sh@41 -- # jq -r . 00:06:18.545 -x option must be non-negative. 00:06:18.545 [2024-04-24 20:37:42.951027] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:18.545 accel_perf options: 00:06:18.545 [-h help message] 00:06:18.545 [-q queue depth per core] 00:06:18.545 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:18.545 [-T number of threads per core 00:06:18.545 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:18.545 [-t time in seconds] 00:06:18.545 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:18.545 [ dif_verify, , dif_generate, dif_generate_copy 00:06:18.545 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:18.545 [-l for compress/decompress workloads, name of uncompressed input file 00:06:18.545 [-S for crc32c workload, use this seed value (default 0) 00:06:18.545 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:18.545 [-f for fill workload, use this BYTE value (default 255) 00:06:18.545 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:18.545 [-y verify result if this switch is on] 00:06:18.545 [-a tasks to allocate per core (default: same value as -q)] 00:06:18.545 Can be used to spread operations across a wider range of memory. 
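The two usage dumps above come from the accel_wrong_workload and accel_negative_buffers cases: an unknown -w value ("foobar") and a negative -x source-buffer count both make spdk_app_parse_args fail with status 1, which is exactly what the tests assert. Per the help text, -x applies to the xor workload and its minimum is 2, so a valid invocation for contrast would look like the sketch below; the buffer count 3 is an arbitrary illustration, not a value taken from this run:

  # Valid xor run for contrast with the rejected '-x -1' above:
  # three source buffers, verify the result, one-second run.
  ./build/examples/accel_perf -t 1 -w xor -x 3 -y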
00:06:18.545 20:37:42 -- common/autotest_common.sh@641 -- # es=1 00:06:18.545 20:37:42 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:18.545 20:37:42 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:18.545 20:37:42 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:18.545 00:06:18.545 real 0m0.036s 00:06:18.545 user 0m0.037s 00:06:18.545 sys 0m0.016s 00:06:18.545 20:37:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:18.545 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.545 ************************************ 00:06:18.545 END TEST accel_negative_buffers 00:06:18.545 ************************************ 00:06:18.545 20:37:42 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:18.545 20:37:42 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:18.545 20:37:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:18.545 20:37:42 -- common/autotest_common.sh@10 -- # set +x 00:06:18.545 ************************************ 00:06:18.545 START TEST accel_crc32c 00:06:18.545 ************************************ 00:06:18.545 20:37:43 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:18.545 20:37:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.545 20:37:43 -- accel/accel.sh@17 -- # local accel_module 00:06:18.545 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.545 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.545 20:37:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:18.545 20:37:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.545 20:37:43 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.545 20:37:43 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.545 20:37:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:18.545 20:37:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.545 20:37:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.545 20:37:43 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.545 20:37:43 -- accel/accel.sh@40 -- # local IFS=, 00:06:18.545 20:37:43 -- accel/accel.sh@41 -- # jq -r . 00:06:18.545 [2024-04-24 20:37:43.170155] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
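The block of "val=" lines that follows (and recurs for every remaining case) is accel.sh's accel_test helper replaying the options it handed to accel_perf: it reads colon-separated name:value records with IFS=: and dispatches on the name, recording the workload in accel_opc and the backing engine in accel_module. A simplified reconstruction of that loop, offered as an illustration rather than the script's exact code, with the record names in the case arms being assumptions:

  # Illustrative reading of the trace pattern: parse "name:value" records,
  # remember the workload and the module that serviced it.
  while IFS=: read -r var val; do
    case "$var" in
      opc)    accel_opc=$val ;;     # e.g. crc32c, copy, fill, ...
      module) accel_module=$val ;;  # e.g. software
    esac
  done <<'EOF'
  opc:crc32c
  module:software
  EOF
  echo "workload $accel_opc ran on the $accel_module engine"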
00:06:18.545 [2024-04-24 20:37:43.170231] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2585993 ] 00:06:18.805 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.805 [2024-04-24 20:37:43.251954] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.805 [2024-04-24 20:37:43.328275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.805 20:37:43 -- accel/accel.sh@20 -- # val= 00:06:18.805 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.805 20:37:43 -- accel/accel.sh@20 -- # val= 00:06:18.805 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.805 20:37:43 -- accel/accel.sh@20 -- # val=0x1 00:06:18.805 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.805 20:37:43 -- accel/accel.sh@20 -- # val= 00:06:18.805 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.805 20:37:43 -- accel/accel.sh@20 -- # val= 00:06:18.805 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.805 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val=crc32c 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val=32 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val= 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val=software 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@22 -- # accel_module=software 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val=32 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val=32 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- 
accel/accel.sh@20 -- # val=1 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val=Yes 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val= 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:18.806 20:37:43 -- accel/accel.sh@20 -- # val= 00:06:18.806 20:37:43 -- accel/accel.sh@21 -- # case "$var" in 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # IFS=: 00:06:18.806 20:37:43 -- accel/accel.sh@19 -- # read -r var val 00:06:20.188 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.188 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.188 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.188 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.188 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.188 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.188 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.188 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.188 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.188 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.188 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.188 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.188 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.188 20:37:44 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:20.189 20:37:44 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:20.189 20:37:44 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:20.189 00:06:20.189 real 0m1.315s 00:06:20.189 user 0m1.196s 00:06:20.189 sys 0m0.129s 00:06:20.189 20:37:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:20.189 20:37:44 -- common/autotest_common.sh@10 -- # set +x 00:06:20.189 ************************************ 00:06:20.189 END TEST accel_crc32c 00:06:20.189 ************************************ 00:06:20.189 20:37:44 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:20.189 20:37:44 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:20.189 20:37:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:20.189 20:37:44 -- common/autotest_common.sh@10 -- # set +x 00:06:20.189 ************************************ 00:06:20.189 START TEST 
accel_crc32c_C2 00:06:20.189 ************************************ 00:06:20.189 20:37:44 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:20.189 20:37:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:20.189 20:37:44 -- accel/accel.sh@17 -- # local accel_module 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:20.189 20:37:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:20.189 20:37:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:20.189 20:37:44 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:20.189 20:37:44 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:20.189 20:37:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:20.189 20:37:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:20.189 20:37:44 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:20.189 20:37:44 -- accel/accel.sh@40 -- # local IFS=, 00:06:20.189 20:37:44 -- accel/accel.sh@41 -- # jq -r . 00:06:20.189 [2024-04-24 20:37:44.640886] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:20.189 [2024-04-24 20:37:44.640972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586277 ] 00:06:20.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.189 [2024-04-24 20:37:44.704950] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.189 [2024-04-24 20:37:44.777838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=0x1 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=crc32c 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=0 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- 
accel/accel.sh@20 -- # val='4096 bytes' 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=software 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@22 -- # accel_module=software 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=32 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=32 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=1 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val=Yes 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:20.189 20:37:44 -- accel/accel.sh@20 -- # val= 00:06:20.189 20:37:44 -- accel/accel.sh@21 -- # case "$var" in 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # IFS=: 00:06:20.189 20:37:44 -- accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:45 -- accel/accel.sh@20 -- # val= 00:06:21.573 20:37:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # IFS=: 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:45 -- accel/accel.sh@20 -- # val= 00:06:21.573 20:37:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # IFS=: 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:45 -- accel/accel.sh@20 -- # val= 00:06:21.573 20:37:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # IFS=: 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:45 -- accel/accel.sh@20 -- # val= 00:06:21.573 20:37:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # IFS=: 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:45 -- accel/accel.sh@20 -- # val= 00:06:21.573 20:37:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # IFS=: 00:06:21.573 20:37:45 -- 
accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:45 -- accel/accel.sh@20 -- # val= 00:06:21.573 20:37:45 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # IFS=: 00:06:21.573 20:37:45 -- accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:45 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.573 20:37:45 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:21.573 20:37:45 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.573 00:06:21.573 real 0m1.294s 00:06:21.573 user 0m1.195s 00:06:21.573 sys 0m0.111s 00:06:21.573 20:37:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.573 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:06:21.573 ************************************ 00:06:21.573 END TEST accel_crc32c_C2 00:06:21.573 ************************************ 00:06:21.573 20:37:45 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:21.573 20:37:45 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.573 20:37:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.573 20:37:45 -- common/autotest_common.sh@10 -- # set +x 00:06:21.573 ************************************ 00:06:21.573 START TEST accel_copy 00:06:21.573 ************************************ 00:06:21.573 20:37:46 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:21.573 20:37:46 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.573 20:37:46 -- accel/accel.sh@17 -- # local accel_module 00:06:21.573 20:37:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:21.573 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.573 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.573 20:37:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.573 20:37:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.573 20:37:46 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.573 20:37:46 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.573 20:37:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.573 20:37:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.573 20:37:46 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.573 20:37:46 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.573 20:37:46 -- accel/accel.sh@41 -- # jq -r . 00:06:21.573 [2024-04-24 20:37:46.095848] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:21.573 [2024-04-24 20:37:46.095895] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586510 ] 00:06:21.573 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.573 [2024-04-24 20:37:46.167144] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.834 [2024-04-24 20:37:46.241652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.834 20:37:46 -- accel/accel.sh@20 -- # val= 00:06:21.834 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.834 20:37:46 -- accel/accel.sh@20 -- # val= 00:06:21.834 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.834 20:37:46 -- accel/accel.sh@20 -- # val=0x1 00:06:21.834 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.834 20:37:46 -- accel/accel.sh@20 -- # val= 00:06:21.834 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.834 20:37:46 -- accel/accel.sh@20 -- # val= 00:06:21.834 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.834 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.834 20:37:46 -- accel/accel.sh@20 -- # val=copy 00:06:21.834 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.834 20:37:46 -- accel/accel.sh@23 -- # accel_opc=copy 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val= 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val=software 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@22 -- # accel_module=software 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val=32 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val=32 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val=1 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val=Yes 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val= 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:21.835 20:37:46 -- accel/accel.sh@20 -- # val= 00:06:21.835 20:37:46 -- accel/accel.sh@21 -- # case "$var" in 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # IFS=: 00:06:21.835 20:37:46 -- accel/accel.sh@19 -- # read -r var val 00:06:22.776 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:22.776 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:22.776 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:22.776 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:22.776 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:22.776 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:22.776 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:22.776 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:22.776 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:22.776 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:22.776 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:22.776 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:22.776 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:22.776 20:37:47 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.776 20:37:47 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:22.776 20:37:47 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.776 00:06:22.776 real 0m1.287s 00:06:22.776 user 0m1.188s 00:06:22.776 sys 0m0.111s 00:06:22.776 20:37:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:22.776 20:37:47 -- common/autotest_common.sh@10 -- # set +x 00:06:22.776 ************************************ 00:06:22.776 END TEST accel_copy 00:06:22.776 ************************************ 00:06:22.776 20:37:47 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:22.776 20:37:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:22.776 20:37:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:22.776 20:37:47 -- common/autotest_common.sh@10 -- # set +x 00:06:23.037 ************************************ 00:06:23.037 START TEST accel_fill 00:06:23.037 ************************************ 00:06:23.037 20:37:47 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.037 20:37:47 -- accel/accel.sh@16 -- # local accel_opc 
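The accel_fill case starting here is the first one to exercise more of the knobs from the usage text: -f 128 sets the fill byte, -q 64 the queue depth per core, and -a 64 the tasks allocated per core, alongside the usual -t 1 and -y. Reconstructed as a direct invocation, with the same caveat that the /dev/fd/62 JSON config used by the harness is omitted:

  # Fill 4 KiB buffers with byte value 128, queue depth 64, 64 tasks per core,
  # verify the output, run for one second.
  ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y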
00:06:23.037 20:37:47 -- accel/accel.sh@17 -- # local accel_module 00:06:23.037 20:37:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.037 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.037 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.037 20:37:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:23.037 20:37:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.037 20:37:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:23.037 20:37:47 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:23.037 20:37:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.037 20:37:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.037 20:37:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:23.037 20:37:47 -- accel/accel.sh@40 -- # local IFS=, 00:06:23.037 20:37:47 -- accel/accel.sh@41 -- # jq -r . 00:06:23.037 [2024-04-24 20:37:47.545085] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:23.037 [2024-04-24 20:37:47.545120] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586765 ] 00:06:23.037 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.037 [2024-04-24 20:37:47.614450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.297 [2024-04-24 20:37:47.680125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.297 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:23.297 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.297 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.297 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.297 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:23.297 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.297 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.297 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.297 20:37:47 -- accel/accel.sh@20 -- # val=0x1 00:06:23.297 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.297 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.297 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.297 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val=fill 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@23 -- # accel_opc=fill 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val=0x80 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 
-- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val=software 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@22 -- # accel_module=software 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val=64 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val=64 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val=1 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val=Yes 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:23.298 20:37:47 -- accel/accel.sh@20 -- # val= 00:06:23.298 20:37:47 -- accel/accel.sh@21 -- # case "$var" in 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # IFS=: 00:06:23.298 20:37:47 -- accel/accel.sh@19 -- # read -r var val 00:06:24.238 20:37:48 -- accel/accel.sh@20 -- # val= 00:06:24.238 20:37:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # IFS=: 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # read -r var val 00:06:24.238 20:37:48 -- accel/accel.sh@20 -- # val= 00:06:24.238 20:37:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # IFS=: 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # read -r var val 00:06:24.238 20:37:48 -- accel/accel.sh@20 -- # val= 00:06:24.238 20:37:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # IFS=: 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # read -r var val 00:06:24.238 20:37:48 -- accel/accel.sh@20 -- # val= 00:06:24.238 20:37:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # IFS=: 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # read -r var val 00:06:24.238 20:37:48 -- accel/accel.sh@20 -- # val= 00:06:24.238 20:37:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # IFS=: 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # read -r var val 00:06:24.238 20:37:48 -- accel/accel.sh@20 -- # val= 00:06:24.238 20:37:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.238 20:37:48 -- accel/accel.sh@19 
-- # IFS=: 00:06:24.238 20:37:48 -- accel/accel.sh@19 -- # read -r var val 00:06:24.238 20:37:48 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.238 20:37:48 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:24.238 20:37:48 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.238 00:06:24.238 real 0m1.275s 00:06:24.238 user 0m1.185s 00:06:24.238 sys 0m0.102s 00:06:24.238 20:37:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.238 20:37:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.238 ************************************ 00:06:24.238 END TEST accel_fill 00:06:24.238 ************************************ 00:06:24.238 20:37:48 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:24.238 20:37:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:24.238 20:37:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.238 20:37:48 -- common/autotest_common.sh@10 -- # set +x 00:06:24.499 ************************************ 00:06:24.499 START TEST accel_copy_crc32c 00:06:24.499 ************************************ 00:06:24.499 20:37:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:06:24.499 20:37:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.499 20:37:48 -- accel/accel.sh@17 -- # local accel_module 00:06:24.499 20:37:48 -- accel/accel.sh@19 -- # IFS=: 00:06:24.499 20:37:48 -- accel/accel.sh@19 -- # read -r var val 00:06:24.499 20:37:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:24.499 20:37:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:24.499 20:37:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.499 20:37:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.499 20:37:48 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.499 20:37:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.499 20:37:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.499 20:37:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.499 20:37:48 -- accel/accel.sh@40 -- # local IFS=, 00:06:24.499 20:37:48 -- accel/accel.sh@41 -- # jq -r . 00:06:24.499 [2024-04-24 20:37:48.994462] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:24.499 [2024-04-24 20:37:48.994522] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587112 ] 00:06:24.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.499 [2024-04-24 20:37:49.072974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.759 [2024-04-24 20:37:49.150367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val= 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val= 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val=0x1 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val= 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val= 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val=0 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val= 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val=software 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@22 -- # accel_module=software 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val=32 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 
00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val=32 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val=1 00:06:24.759 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.759 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.759 20:37:49 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.760 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.760 20:37:49 -- accel/accel.sh@20 -- # val=Yes 00:06:24.760 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.760 20:37:49 -- accel/accel.sh@20 -- # val= 00:06:24.760 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:24.760 20:37:49 -- accel/accel.sh@20 -- # val= 00:06:24.760 20:37:49 -- accel/accel.sh@21 -- # case "$var" in 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # IFS=: 00:06:24.760 20:37:49 -- accel/accel.sh@19 -- # read -r var val 00:06:25.724 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:25.724 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:25.724 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:25.724 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:25.724 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:25.724 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:25.724 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:25.724 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:25.724 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:25.724 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:25.724 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:25.724 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:25.724 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:25.724 20:37:50 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.724 20:37:50 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:25.724 20:37:50 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.724 00:06:25.724 real 0m1.315s 00:06:25.724 user 0m1.204s 00:06:25.724 sys 0m0.121s 00:06:25.724 20:37:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:25.724 20:37:50 -- common/autotest_common.sh@10 -- # set +x 00:06:25.724 ************************************ 00:06:25.724 END TEST accel_copy_crc32c 00:06:25.724 ************************************ 00:06:25.724 20:37:50 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:25.724 
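The _C2 variants (accel_crc32c_C2 earlier, accel_copy_crc32c_C2 starting here) add -C 2, which per the usage text configures the I/O vector size for workloads that support it, so each operation is carried out over a two-element vector instead of a single buffer. As a direct invocation, hedged like the others:

  # copy_crc32c over a two-element I/O vector (-C 2), result verification on.
  ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2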
20:37:50 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:25.724 20:37:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.724 20:37:50 -- common/autotest_common.sh@10 -- # set +x 00:06:25.984 ************************************ 00:06:25.984 START TEST accel_copy_crc32c_C2 00:06:25.984 ************************************ 00:06:25.984 20:37:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:25.984 20:37:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.984 20:37:50 -- accel/accel.sh@17 -- # local accel_module 00:06:25.984 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:25.984 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:25.984 20:37:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:25.984 20:37:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:25.984 20:37:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.984 20:37:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.984 20:37:50 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.984 20:37:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.984 20:37:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.984 20:37:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.984 20:37:50 -- accel/accel.sh@40 -- # local IFS=, 00:06:25.984 20:37:50 -- accel/accel.sh@41 -- # jq -r . 00:06:25.984 [2024-04-24 20:37:50.471649] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:25.984 [2024-04-24 20:37:50.471714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587475 ] 00:06:25.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.984 [2024-04-24 20:37:50.549008] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.984 [2024-04-24 20:37:50.613497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=0x1 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 
20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=0 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=software 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@22 -- # accel_module=software 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=32 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=32 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=1 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val=Yes 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:26.244 20:37:50 -- accel/accel.sh@20 -- # val= 00:06:26.244 20:37:50 -- accel/accel.sh@21 -- # case "$var" in 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # IFS=: 00:06:26.244 20:37:50 -- accel/accel.sh@19 -- # read -r var val 00:06:27.185 20:37:51 -- accel/accel.sh@20 -- # val= 00:06:27.185 20:37:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # IFS=: 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # read -r var val 00:06:27.185 20:37:51 -- accel/accel.sh@20 -- # val= 00:06:27.185 20:37:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # IFS=: 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # read -r var val 00:06:27.185 20:37:51 -- accel/accel.sh@20 -- # val= 00:06:27.185 20:37:51 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # IFS=: 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # read -r var val 00:06:27.185 20:37:51 -- accel/accel.sh@20 -- # val= 00:06:27.185 20:37:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # IFS=: 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # read -r var val 00:06:27.185 20:37:51 -- accel/accel.sh@20 -- # val= 00:06:27.185 20:37:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # IFS=: 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # read -r var val 00:06:27.185 20:37:51 -- accel/accel.sh@20 -- # val= 00:06:27.185 20:37:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # IFS=: 00:06:27.185 20:37:51 -- accel/accel.sh@19 -- # read -r var val 00:06:27.185 20:37:51 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.185 20:37:51 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:27.185 20:37:51 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.185 00:06:27.185 real 0m1.302s 00:06:27.185 user 0m1.203s 00:06:27.185 sys 0m0.110s 00:06:27.185 20:37:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.185 20:37:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.185 ************************************ 00:06:27.185 END TEST accel_copy_crc32c_C2 00:06:27.185 ************************************ 00:06:27.185 20:37:51 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:27.185 20:37:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:27.185 20:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.185 20:37:51 -- common/autotest_common.sh@10 -- # set +x 00:06:27.446 ************************************ 00:06:27.446 START TEST accel_dualcast 00:06:27.446 ************************************ 00:06:27.446 20:37:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:06:27.446 20:37:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:27.446 20:37:51 -- accel/accel.sh@17 -- # local accel_module 00:06:27.446 20:37:51 -- accel/accel.sh@19 -- # IFS=: 00:06:27.446 20:37:51 -- accel/accel.sh@19 -- # read -r var val 00:06:27.446 20:37:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:27.446 20:37:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:27.446 20:37:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.446 20:37:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.446 20:37:51 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.446 20:37:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.446 20:37:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.446 20:37:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.446 20:37:51 -- accel/accel.sh@40 -- # local IFS=, 00:06:27.446 20:37:51 -- accel/accel.sh@41 -- # jq -r . 00:06:27.446 [2024-04-24 20:37:51.950190] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:27.446 [2024-04-24 20:37:51.950288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2587830 ] 00:06:27.446 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.446 [2024-04-24 20:37:52.015633] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.707 [2024-04-24 20:37:52.089118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val= 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val= 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val=0x1 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val= 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val= 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val=dualcast 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val= 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val=software 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@22 -- # accel_module=software 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val=32 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val=32 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val=1 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 
-- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val=Yes 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val= 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:27.707 20:37:52 -- accel/accel.sh@20 -- # val= 00:06:27.707 20:37:52 -- accel/accel.sh@21 -- # case "$var" in 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # IFS=: 00:06:27.707 20:37:52 -- accel/accel.sh@19 -- # read -r var val 00:06:28.661 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:28.661 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.661 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:28.662 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:28.662 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:28.662 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:28.662 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:28.662 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:28.662 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:28.662 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:28.662 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:28.662 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:28.662 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:28.662 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:28.662 20:37:53 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.662 20:37:53 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:28.662 20:37:53 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.662 00:06:28.662 real 0m1.298s 00:06:28.662 user 0m1.203s 00:06:28.662 sys 0m0.105s 00:06:28.662 20:37:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.662 20:37:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.662 ************************************ 00:06:28.662 END TEST accel_dualcast 00:06:28.662 ************************************ 00:06:28.662 20:37:53 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:28.662 20:37:53 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:28.662 20:37:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.662 20:37:53 -- common/autotest_common.sh@10 -- # set +x 00:06:28.926 ************************************ 00:06:28.926 START TEST accel_compare 00:06:28.926 ************************************ 00:06:28.926 20:37:53 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:06:28.926 20:37:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.926 20:37:53 
-- accel/accel.sh@17 -- # local accel_module 00:06:28.926 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:28.926 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:28.926 20:37:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:28.926 20:37:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:28.926 20:37:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.926 20:37:53 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.926 20:37:53 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.926 20:37:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.926 20:37:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.926 20:37:53 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.926 20:37:53 -- accel/accel.sh@40 -- # local IFS=, 00:06:28.926 20:37:53 -- accel/accel.sh@41 -- # jq -r . 00:06:28.926 [2024-04-24 20:37:53.426801] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:28.927 [2024-04-24 20:37:53.426889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588189 ] 00:06:28.927 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.927 [2024-04-24 20:37:53.507540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.187 [2024-04-24 20:37:53.571156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val=0x1 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val=compare 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@23 -- # accel_opc=compare 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- 
accel/accel.sh@20 -- # val=software 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@22 -- # accel_module=software 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val=32 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val=32 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val=1 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val=Yes 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.187 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:29.187 20:37:53 -- accel/accel.sh@20 -- # val= 00:06:29.187 20:37:53 -- accel/accel.sh@21 -- # case "$var" in 00:06:29.188 20:37:53 -- accel/accel.sh@19 -- # IFS=: 00:06:29.188 20:37:53 -- accel/accel.sh@19 -- # read -r var val 00:06:30.128 20:37:54 -- accel/accel.sh@20 -- # val= 00:06:30.128 20:37:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # IFS=: 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # read -r var val 00:06:30.128 20:37:54 -- accel/accel.sh@20 -- # val= 00:06:30.128 20:37:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # IFS=: 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # read -r var val 00:06:30.128 20:37:54 -- accel/accel.sh@20 -- # val= 00:06:30.128 20:37:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # IFS=: 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # read -r var val 00:06:30.128 20:37:54 -- accel/accel.sh@20 -- # val= 00:06:30.128 20:37:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # IFS=: 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # read -r var val 00:06:30.128 20:37:54 -- accel/accel.sh@20 -- # val= 00:06:30.128 20:37:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # IFS=: 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # read -r var val 00:06:30.128 20:37:54 -- accel/accel.sh@20 -- # val= 00:06:30.128 20:37:54 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # IFS=: 00:06:30.128 20:37:54 -- accel/accel.sh@19 -- # read -r var val 00:06:30.128 20:37:54 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.128 20:37:54 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:30.128 20:37:54 -- accel/accel.sh@27 -- # [[ software == 
\s\o\f\t\w\a\r\e ]] 00:06:30.128 00:06:30.128 real 0m1.302s 00:06:30.128 user 0m1.193s 00:06:30.128 sys 0m0.119s 00:06:30.128 20:37:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.128 20:37:54 -- common/autotest_common.sh@10 -- # set +x 00:06:30.128 ************************************ 00:06:30.128 END TEST accel_compare 00:06:30.128 ************************************ 00:06:30.128 20:37:54 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:30.128 20:37:54 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:30.128 20:37:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.128 20:37:54 -- common/autotest_common.sh@10 -- # set +x 00:06:30.390 ************************************ 00:06:30.390 START TEST accel_xor 00:06:30.390 ************************************ 00:06:30.390 20:37:54 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:06:30.390 20:37:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.390 20:37:54 -- accel/accel.sh@17 -- # local accel_module 00:06:30.390 20:37:54 -- accel/accel.sh@19 -- # IFS=: 00:06:30.390 20:37:54 -- accel/accel.sh@19 -- # read -r var val 00:06:30.390 20:37:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:30.390 20:37:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:30.390 20:37:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.390 20:37:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.390 20:37:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.390 20:37:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.390 20:37:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.390 20:37:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.390 20:37:54 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.390 20:37:54 -- accel/accel.sh@41 -- # jq -r . 00:06:30.390 [2024-04-24 20:37:54.906541] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:30.390 [2024-04-24 20:37:54.906618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588478 ] 00:06:30.390 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.390 [2024-04-24 20:37:54.986568] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.650 [2024-04-24 20:37:55.064342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val= 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val= 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val=0x1 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val= 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val= 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val=xor 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val=2 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val= 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val=software 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@22 -- # accel_module=software 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val=32 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val=32 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- 
accel/accel.sh@20 -- # val=1 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val=Yes 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val= 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:30.650 20:37:55 -- accel/accel.sh@20 -- # val= 00:06:30.650 20:37:55 -- accel/accel.sh@21 -- # case "$var" in 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # IFS=: 00:06:30.650 20:37:55 -- accel/accel.sh@19 -- # read -r var val 00:06:31.596 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:31.596 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:31.596 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:31.596 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:31.596 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:31.596 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:31.596 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:31.596 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:31.596 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:31.596 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:31.596 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:31.596 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:31.596 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:31.596 20:37:56 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.596 20:37:56 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:31.596 20:37:56 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.596 00:06:31.596 real 0m1.315s 00:06:31.596 user 0m1.206s 00:06:31.596 sys 0m0.120s 00:06:31.596 20:37:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.596 20:37:56 -- common/autotest_common.sh@10 -- # set +x 00:06:31.596 ************************************ 00:06:31.596 END TEST accel_xor 00:06:31.596 ************************************ 00:06:31.596 20:37:56 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:31.596 20:37:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:31.596 20:37:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.596 20:37:56 -- common/autotest_common.sh@10 -- # set +x 00:06:31.857 ************************************ 00:06:31.857 START TEST accel_xor 
00:06:31.857 ************************************ 00:06:31.857 20:37:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:06:31.857 20:37:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:31.857 20:37:56 -- accel/accel.sh@17 -- # local accel_module 00:06:31.857 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:31.857 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:31.857 20:37:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:31.857 20:37:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:31.857 20:37:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.857 20:37:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.857 20:37:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.857 20:37:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.857 20:37:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.857 20:37:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.857 20:37:56 -- accel/accel.sh@40 -- # local IFS=, 00:06:31.857 20:37:56 -- accel/accel.sh@41 -- # jq -r . 00:06:31.857 [2024-04-24 20:37:56.400674] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:31.857 [2024-04-24 20:37:56.400755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588737 ] 00:06:31.857 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.857 [2024-04-24 20:37:56.481439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.117 [2024-04-24 20:37:56.559593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val=0x1 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val=xor 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@23 -- # accel_opc=xor 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val=3 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:32.117 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.117 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.117 20:37:56 -- accel/accel.sh@20 -- # val=software 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@22 -- # accel_module=software 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 20:37:56 -- accel/accel.sh@20 -- # val=32 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 20:37:56 -- accel/accel.sh@20 -- # val=32 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 20:37:56 -- accel/accel.sh@20 -- # val=1 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 20:37:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 20:37:56 -- accel/accel.sh@20 -- # val=Yes 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:32.118 20:37:56 -- accel/accel.sh@20 -- # val= 00:06:32.118 20:37:56 -- accel/accel.sh@21 -- # case "$var" in 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # IFS=: 00:06:32.118 20:37:56 -- accel/accel.sh@19 -- # read -r var val 00:06:33.058 20:37:57 -- accel/accel.sh@20 -- # val= 00:06:33.058 20:37:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # IFS=: 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # read -r var val 00:06:33.058 20:37:57 -- accel/accel.sh@20 -- # val= 00:06:33.058 20:37:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # IFS=: 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # read -r var val 00:06:33.058 20:37:57 -- accel/accel.sh@20 -- # val= 00:06:33.058 20:37:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # IFS=: 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # read -r var val 00:06:33.058 20:37:57 -- accel/accel.sh@20 -- # val= 00:06:33.058 20:37:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # IFS=: 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # read -r var val 00:06:33.058 20:37:57 -- accel/accel.sh@20 -- # val= 00:06:33.058 20:37:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # IFS=: 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # 
read -r var val 00:06:33.058 20:37:57 -- accel/accel.sh@20 -- # val= 00:06:33.058 20:37:57 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # IFS=: 00:06:33.058 20:37:57 -- accel/accel.sh@19 -- # read -r var val 00:06:33.058 20:37:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:33.058 20:37:57 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:33.058 20:37:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.058 00:06:33.058 real 0m1.317s 00:06:33.058 user 0m1.202s 00:06:33.058 sys 0m0.127s 00:06:33.058 20:37:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:33.058 20:37:57 -- common/autotest_common.sh@10 -- # set +x 00:06:33.058 ************************************ 00:06:33.058 END TEST accel_xor 00:06:33.058 ************************************ 00:06:33.319 20:37:57 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:33.319 20:37:57 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:33.319 20:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.319 20:37:57 -- common/autotest_common.sh@10 -- # set +x 00:06:33.319 ************************************ 00:06:33.319 START TEST accel_dif_verify 00:06:33.319 ************************************ 00:06:33.319 20:37:57 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:06:33.319 20:37:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.319 20:37:57 -- accel/accel.sh@17 -- # local accel_module 00:06:33.319 20:37:57 -- accel/accel.sh@19 -- # IFS=: 00:06:33.319 20:37:57 -- accel/accel.sh@19 -- # read -r var val 00:06:33.319 20:37:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:33.319 20:37:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:33.319 20:37:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.319 20:37:57 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.319 20:37:57 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.319 20:37:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.319 20:37:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.319 20:37:57 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.319 20:37:57 -- accel/accel.sh@40 -- # local IFS=, 00:06:33.319 20:37:57 -- accel/accel.sh@41 -- # jq -r . 00:06:33.319 [2024-04-24 20:37:57.894662] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
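The second accel_xor variant that just ended adds -x 3 (three source buffers, matching val=3 in its trace), and the accel_dif_verify case starting here runs without -y. Sketches of both standalone commands, with the same caveat about the omitted JSON config:

    # xor across three source buffers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3
    # DIF verify workload as exercised by the harness
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify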
00:06:33.319 [2024-04-24 20:37:57.894738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588978 ] 00:06:33.319 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.580 [2024-04-24 20:37:57.976789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.580 [2024-04-24 20:37:58.055388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val= 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val= 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val=0x1 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val= 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val= 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val=dif_verify 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val= 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val=software 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@22 -- # accel_module=software 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r 
var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val=32 00:06:33.580 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.580 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.580 20:37:58 -- accel/accel.sh@20 -- # val=32 00:06:33.581 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.581 20:37:58 -- accel/accel.sh@20 -- # val=1 00:06:33.581 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.581 20:37:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.581 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.581 20:37:58 -- accel/accel.sh@20 -- # val=No 00:06:33.581 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.581 20:37:58 -- accel/accel.sh@20 -- # val= 00:06:33.581 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:33.581 20:37:58 -- accel/accel.sh@20 -- # val= 00:06:33.581 20:37:58 -- accel/accel.sh@21 -- # case "$var" in 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # IFS=: 00:06:33.581 20:37:58 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.965 20:37:59 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:34.965 20:37:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.965 00:06:34.965 real 0m1.320s 00:06:34.965 user 0m1.204s 00:06:34.965 sys 0m0.127s 00:06:34.965 20:37:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.965 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:06:34.965 
************************************ 00:06:34.965 END TEST accel_dif_verify 00:06:34.965 ************************************ 00:06:34.965 20:37:59 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:34.965 20:37:59 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:34.965 20:37:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.965 20:37:59 -- common/autotest_common.sh@10 -- # set +x 00:06:34.965 ************************************ 00:06:34.965 START TEST accel_dif_generate 00:06:34.965 ************************************ 00:06:34.965 20:37:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:06:34.965 20:37:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:34.965 20:37:59 -- accel/accel.sh@17 -- # local accel_module 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:34.965 20:37:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:34.965 20:37:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.965 20:37:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.965 20:37:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.965 20:37:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.965 20:37:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.965 20:37:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.965 20:37:59 -- accel/accel.sh@40 -- # local IFS=, 00:06:34.965 20:37:59 -- accel/accel.sh@41 -- # jq -r . 00:06:34.965 [2024-04-24 20:37:59.375073] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
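Each case is judged by the three accel.sh@27 assertions visible just above the timing summary: a non-empty accel_module, a non-empty accel_opc, and accel_module equal to software. The variable names come from the trace (accel.sh@22/@23); the exact quoting is a guess, since xtrace prints the already-expanded values. Roughly:

    # pass only if accel_perf reported a module and an opcode, and ran in software
    [[ -n "$accel_module" ]] && [[ -n "$accel_opc" ]] && [[ "$accel_module" == software ]]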
00:06:34.965 [2024-04-24 20:37:59.375164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589309 ] 00:06:34.965 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.965 [2024-04-24 20:37:59.463088] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.965 [2024-04-24 20:37:59.539758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val=0x1 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val=dif_generate 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val='512 bytes' 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val='8 bytes' 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val=software 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@22 -- # accel_module=software 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # read 
-r var val 00:06:34.965 20:37:59 -- accel/accel.sh@20 -- # val=32 00:06:34.965 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.965 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.966 20:37:59 -- accel/accel.sh@20 -- # val=32 00:06:34.966 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.966 20:37:59 -- accel/accel.sh@20 -- # val=1 00:06:34.966 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.966 20:37:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.966 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.966 20:37:59 -- accel/accel.sh@20 -- # val=No 00:06:34.966 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.966 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.966 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:34.966 20:37:59 -- accel/accel.sh@20 -- # val= 00:06:34.966 20:37:59 -- accel/accel.sh@21 -- # case "$var" in 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # IFS=: 00:06:34.966 20:37:59 -- accel/accel.sh@19 -- # read -r var val 00:06:36.349 20:38:00 -- accel/accel.sh@20 -- # val= 00:06:36.349 20:38:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # IFS=: 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # read -r var val 00:06:36.349 20:38:00 -- accel/accel.sh@20 -- # val= 00:06:36.349 20:38:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # IFS=: 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # read -r var val 00:06:36.349 20:38:00 -- accel/accel.sh@20 -- # val= 00:06:36.349 20:38:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # IFS=: 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # read -r var val 00:06:36.349 20:38:00 -- accel/accel.sh@20 -- # val= 00:06:36.349 20:38:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # IFS=: 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # read -r var val 00:06:36.349 20:38:00 -- accel/accel.sh@20 -- # val= 00:06:36.349 20:38:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # IFS=: 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # read -r var val 00:06:36.349 20:38:00 -- accel/accel.sh@20 -- # val= 00:06:36.349 20:38:00 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # IFS=: 00:06:36.349 20:38:00 -- accel/accel.sh@19 -- # read -r var val 00:06:36.349 20:38:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:36.349 20:38:00 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:36.349 20:38:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.349 00:06:36.349 real 0m1.325s 00:06:36.349 user 0m1.214s 00:06:36.349 sys 0m0.122s 00:06:36.349 20:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:36.349 20:38:00 -- common/autotest_common.sh@10 -- # set +x 00:06:36.349 
************************************ 00:06:36.349 END TEST accel_dif_generate 00:06:36.349 ************************************ 00:06:36.349 20:38:00 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:36.349 20:38:00 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:36.349 20:38:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.349 20:38:00 -- common/autotest_common.sh@10 -- # set +x 00:06:36.349 ************************************ 00:06:36.349 START TEST accel_dif_generate_copy 00:06:36.349 ************************************ 00:06:36.350 20:38:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:06:36.350 20:38:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:36.350 20:38:00 -- accel/accel.sh@17 -- # local accel_module 00:06:36.350 20:38:00 -- accel/accel.sh@19 -- # IFS=: 00:06:36.350 20:38:00 -- accel/accel.sh@19 -- # read -r var val 00:06:36.350 20:38:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:36.350 20:38:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:36.350 20:38:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:36.350 20:38:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.350 20:38:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.350 20:38:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.350 20:38:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.350 20:38:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.350 20:38:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:36.350 20:38:00 -- accel/accel.sh@41 -- # jq -r . 00:06:36.350 [2024-04-24 20:38:00.885513] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
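The accel_dif_generate_copy case starting here is the copy variant of the DIF generation workload that just completed. A sketch of the standalone command, with the usual caveat about the omitted JSON config:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy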
00:06:36.350 [2024-04-24 20:38:00.885574] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589669 ] 00:06:36.350 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.350 [2024-04-24 20:38:00.966545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.611 [2024-04-24 20:38:01.032641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val= 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val= 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val=0x1 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val= 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val= 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.611 20:38:01 -- accel/accel.sh@20 -- # val= 00:06:36.611 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.611 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val=software 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@22 -- # accel_module=software 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val=32 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val=32 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r 
var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val=1 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val=No 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val= 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:36.612 20:38:01 -- accel/accel.sh@20 -- # val= 00:06:36.612 20:38:01 -- accel/accel.sh@21 -- # case "$var" in 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # IFS=: 00:06:36.612 20:38:01 -- accel/accel.sh@19 -- # read -r var val 00:06:37.555 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:37.555 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:37.555 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:37.555 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:37.555 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:37.555 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:37.555 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:37.555 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:37.555 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:37.555 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:37.555 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:37.555 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:37.555 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:37.555 20:38:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.555 20:38:02 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:37.555 20:38:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.555 00:06:37.555 real 0m1.304s 00:06:37.555 user 0m1.204s 00:06:37.555 sys 0m0.111s 00:06:37.555 20:38:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.555 20:38:02 -- common/autotest_common.sh@10 -- # set +x 00:06:37.555 ************************************ 00:06:37.555 END TEST accel_dif_generate_copy 00:06:37.555 ************************************ 00:06:37.555 20:38:02 -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:37.555 20:38:02 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.555 20:38:02 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:37.555 20:38:02 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:06:37.555 20:38:02 -- common/autotest_common.sh@10 -- # set +x 00:06:37.816 ************************************ 00:06:37.816 START TEST accel_comp 00:06:37.816 ************************************ 00:06:37.816 20:38:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.816 20:38:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.816 20:38:02 -- accel/accel.sh@17 -- # local accel_module 00:06:37.816 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:37.816 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:37.816 20:38:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.816 20:38:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.816 20:38:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.816 20:38:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.816 20:38:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.816 20:38:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.816 20:38:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.816 20:38:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.816 20:38:02 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.816 20:38:02 -- accel/accel.sh@41 -- # jq -r . 00:06:37.816 [2024-04-24 20:38:02.343669] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:37.816 [2024-04-24 20:38:02.343790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590024 ] 00:06:37.816 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.816 [2024-04-24 20:38:02.423879] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.077 [2024-04-24 20:38:02.499958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val=0x1 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 
-- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val=compress 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@23 -- # accel_opc=compress 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val=software 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@22 -- # accel_module=software 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val=32 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.077 20:38:02 -- accel/accel.sh@20 -- # val=32 00:06:38.077 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.077 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.078 20:38:02 -- accel/accel.sh@20 -- # val=1 00:06:38.078 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.078 20:38:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:38.078 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.078 20:38:02 -- accel/accel.sh@20 -- # val=No 00:06:38.078 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.078 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.078 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:38.078 20:38:02 -- accel/accel.sh@20 -- # val= 00:06:38.078 20:38:02 -- accel/accel.sh@21 -- # case "$var" in 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # IFS=: 00:06:38.078 20:38:02 -- accel/accel.sh@19 -- # read -r var val 00:06:39.022 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.022 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.022 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.022 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # read 
-r var val 00:06:39.022 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.022 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.022 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.022 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.022 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.022 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.022 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.022 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.022 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.022 20:38:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.022 20:38:03 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:39.022 20:38:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.022 00:06:39.022 real 0m1.316s 00:06:39.022 user 0m1.207s 00:06:39.022 sys 0m0.121s 00:06:39.022 20:38:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:39.022 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:06:39.022 ************************************ 00:06:39.022 END TEST accel_comp 00:06:39.022 ************************************ 00:06:39.283 20:38:03 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.283 20:38:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:39.283 20:38:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.283 20:38:03 -- common/autotest_common.sh@10 -- # set +x 00:06:39.283 ************************************ 00:06:39.283 START TEST accel_decomp 00:06:39.283 ************************************ 00:06:39.283 20:38:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.283 20:38:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:39.283 20:38:03 -- accel/accel.sh@17 -- # local accel_module 00:06:39.283 20:38:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.283 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.283 20:38:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:39.283 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.283 20:38:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.283 20:38:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.283 20:38:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.283 20:38:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.283 20:38:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.283 20:38:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.283 20:38:03 -- accel/accel.sh@40 -- # local IFS=, 00:06:39.283 20:38:03 -- accel/accel.sh@41 -- # jq -r . 00:06:39.283 [2024-04-24 20:38:03.807091] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:39.283 [2024-04-24 20:38:03.807130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590389 ] 00:06:39.283 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.283 [2024-04-24 20:38:03.876403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.544 [2024-04-24 20:38:03.942340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val=0x1 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val=decompress 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val=software 00:06:39.544 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.544 20:38:03 -- accel/accel.sh@22 -- # accel_module=software 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.544 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.544 20:38:03 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:38:03 -- accel/accel.sh@20 -- # val=32 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 
-- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:38:03 -- accel/accel.sh@20 -- # val=32 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:38:03 -- accel/accel.sh@20 -- # val=1 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:38:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:38:03 -- accel/accel.sh@20 -- # val=Yes 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:39.545 20:38:03 -- accel/accel.sh@20 -- # val= 00:06:39.545 20:38:03 -- accel/accel.sh@21 -- # case "$var" in 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # IFS=: 00:06:39.545 20:38:03 -- accel/accel.sh@19 -- # read -r var val 00:06:40.486 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:40.486 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:40.486 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:40.486 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:40.486 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:40.486 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:40.486 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:40.486 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:40.486 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:40.486 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:40.486 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:40.486 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:40.486 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:40.486 20:38:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.486 20:38:05 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.486 20:38:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.486 00:06:40.486 real 0m1.278s 00:06:40.486 user 0m1.193s 00:06:40.486 sys 0m0.097s 00:06:40.486 20:38:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:40.486 20:38:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.486 ************************************ 00:06:40.486 END TEST accel_decomp 00:06:40.486 ************************************ 00:06:40.486 20:38:05 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.486 20:38:05 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:40.486 20:38:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:40.486 20:38:05 -- common/autotest_common.sh@10 -- # set +x 00:06:40.746 ************************************ 00:06:40.746 START TEST accel_decmop_full 00:06:40.746 ************************************ 00:06:40.746 20:38:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.746 20:38:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:40.746 20:38:05 -- accel/accel.sh@17 -- # local accel_module 00:06:40.746 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:40.746 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:40.746 20:38:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.746 20:38:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:40.746 20:38:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.746 20:38:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.746 20:38:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.746 20:38:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.746 20:38:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.746 20:38:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.746 20:38:05 -- accel/accel.sh@40 -- # local IFS=, 00:06:40.746 20:38:05 -- accel/accel.sh@41 -- # jq -r . 00:06:40.746 [2024-04-24 20:38:05.262873] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:40.746 [2024-04-24 20:38:05.262983] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590732 ] 00:06:40.746 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.746 [2024-04-24 20:38:05.349712] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.007 [2024-04-24 20:38:05.425720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=0x1 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=decompress 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=software 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@22 -- # accel_module=software 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=32 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 
20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=32 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=1 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val=Yes 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.007 20:38:05 -- accel/accel.sh@20 -- # val= 00:06:41.007 20:38:05 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # IFS=: 00:06:41.007 20:38:05 -- accel/accel.sh@19 -- # read -r var val 00:06:41.947 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:41.947 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:41.947 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:41.947 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:41.947 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:41.947 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:41.947 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:41.947 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:41.947 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:41.947 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:41.947 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:41.947 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:41.947 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:41.947 20:38:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.947 20:38:06 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.947 20:38:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.947 00:06:41.947 real 0m1.339s 00:06:41.947 user 0m1.223s 00:06:41.947 sys 0m0.128s 00:06:41.947 20:38:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:41.947 20:38:06 -- common/autotest_common.sh@10 -- # set +x 00:06:41.947 ************************************ 00:06:41.947 END TEST accel_decmop_full 00:06:41.947 ************************************ 00:06:42.208 20:38:06 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.208 20:38:06 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:42.208 20:38:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.208 20:38:06 -- common/autotest_common.sh@10 -- # set +x 00:06:42.208 ************************************ 00:06:42.208 START TEST accel_decomp_mcore 00:06:42.208 ************************************ 00:06:42.208 20:38:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.208 20:38:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.208 20:38:06 -- accel/accel.sh@17 -- # local accel_module 00:06:42.208 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.208 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.208 20:38:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.208 20:38:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:42.208 20:38:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.208 20:38:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.208 20:38:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.208 20:38:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.208 20:38:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.208 20:38:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.208 20:38:06 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.208 20:38:06 -- accel/accel.sh@41 -- # jq -r . 00:06:42.208 [2024-04-24 20:38:06.740620] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:42.208 [2024-04-24 20:38:06.740681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590982 ] 00:06:42.208 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.208 [2024-04-24 20:38:06.818319] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.468 [2024-04-24 20:38:06.886361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.468 [2024-04-24 20:38:06.886495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.468 [2024-04-24 20:38:06.886640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.468 [2024-04-24 20:38:06.886640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.468 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=0xf 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=decompress 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=software 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=32 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=32 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=1 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val=Yes 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:42.469 20:38:06 -- accel/accel.sh@20 -- # val= 00:06:42.469 20:38:06 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # IFS=: 00:06:42.469 20:38:06 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 
20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.410 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.410 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.410 20:38:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.410 20:38:08 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:43.410 20:38:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.410 00:06:43.410 real 0m1.313s 00:06:43.410 user 0m4.442s 00:06:43.410 sys 0m0.118s 00:06:43.410 20:38:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:43.410 20:38:08 -- common/autotest_common.sh@10 -- # set +x 00:06:43.410 ************************************ 00:06:43.410 END TEST accel_decomp_mcore 00:06:43.410 ************************************ 00:06:43.671 20:38:08 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.671 20:38:08 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:43.671 20:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.671 20:38:08 -- common/autotest_common.sh@10 -- # set +x 00:06:43.671 ************************************ 00:06:43.671 START TEST accel_decomp_full_mcore 00:06:43.671 ************************************ 00:06:43.671 20:38:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.671 20:38:08 -- accel/accel.sh@16 -- # local accel_opc 00:06:43.671 20:38:08 -- accel/accel.sh@17 -- # local accel_module 00:06:43.671 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.671 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.671 20:38:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.671 20:38:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:43.671 20:38:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.671 20:38:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:43.671 20:38:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.671 20:38:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.671 20:38:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.671 20:38:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.671 20:38:08 -- accel/accel.sh@40 -- # local IFS=, 00:06:43.671 20:38:08 -- accel/accel.sh@41 -- # jq -r . 00:06:43.671 [2024-04-24 20:38:08.217679] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:43.671 [2024-04-24 20:38:08.217766] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591233 ] 00:06:43.671 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.671 [2024-04-24 20:38:08.278489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.932 [2024-04-24 20:38:08.344694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.932 [2024-04-24 20:38:08.344834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.932 [2024-04-24 20:38:08.344884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.932 [2024-04-24 20:38:08.344884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=0xf 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=decompress 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=software 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@22 -- # accel_module=software 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case 
"$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=32 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=32 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=1 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val=Yes 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:43.932 20:38:08 -- accel/accel.sh@20 -- # val= 00:06:43.932 20:38:08 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # IFS=: 00:06:43.932 20:38:08 -- accel/accel.sh@19 -- # read -r var val 00:06:44.875 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.875 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.875 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.875 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.875 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.875 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.875 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.875 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.875 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.875 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.875 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.875 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.876 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.876 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.876 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 
20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:44.876 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:44.876 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:44.876 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:44.876 20:38:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.876 20:38:09 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:44.876 20:38:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.876 00:06:44.876 real 0m1.306s 00:06:44.876 user 0m4.480s 00:06:44.876 sys 0m0.114s 00:06:44.876 20:38:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:44.876 20:38:09 -- common/autotest_common.sh@10 -- # set +x 00:06:44.876 ************************************ 00:06:44.876 END TEST accel_decomp_full_mcore 00:06:44.876 ************************************ 00:06:45.137 20:38:09 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.137 20:38:09 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:06:45.137 20:38:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.137 20:38:09 -- common/autotest_common.sh@10 -- # set +x 00:06:45.137 ************************************ 00:06:45.137 START TEST accel_decomp_mthread 00:06:45.137 ************************************ 00:06:45.137 20:38:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.137 20:38:09 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.137 20:38:09 -- accel/accel.sh@17 -- # local accel_module 00:06:45.137 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.137 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.137 20:38:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.137 20:38:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:45.137 20:38:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.137 20:38:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:45.137 20:38:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:45.137 20:38:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.137 20:38:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.137 20:38:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:45.137 20:38:09 -- accel/accel.sh@40 -- # local IFS=, 00:06:45.137 20:38:09 -- accel/accel.sh@41 -- # jq -r . 00:06:45.137 [2024-04-24 20:38:09.687226] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:45.137 [2024-04-24 20:38:09.687323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591514 ] 00:06:45.137 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.399 [2024-04-24 20:38:09.779947] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.399 [2024-04-24 20:38:09.858602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=0x1 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=decompress 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=software 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@22 -- # accel_module=software 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=32 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 
-- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=32 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=2 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val=Yes 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:45.399 20:38:09 -- accel/accel.sh@20 -- # val= 00:06:45.399 20:38:09 -- accel/accel.sh@21 -- # case "$var" in 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # IFS=: 00:06:45.399 20:38:09 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:10 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:10 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:10 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.787 20:38:10 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:46.787 20:38:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.787 00:06:46.787 real 0m1.338s 00:06:46.787 user 0m1.222s 00:06:46.787 sys 0m0.127s 00:06:46.787 20:38:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:46.787 20:38:10 -- common/autotest_common.sh@10 -- # set +x 
00:06:46.787 ************************************ 00:06:46.787 END TEST accel_decomp_mthread 00:06:46.787 ************************************ 00:06:46.787 20:38:11 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.787 20:38:11 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:46.787 20:38:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.787 20:38:11 -- common/autotest_common.sh@10 -- # set +x 00:06:46.787 ************************************ 00:06:46.787 START TEST accel_deomp_full_mthread 00:06:46.787 ************************************ 00:06:46.787 20:38:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.787 20:38:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.787 20:38:11 -- accel/accel.sh@17 -- # local accel_module 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.787 20:38:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:46.787 20:38:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.787 20:38:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.787 20:38:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.787 20:38:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.787 20:38:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.787 20:38:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.787 20:38:11 -- accel/accel.sh@40 -- # local IFS=, 00:06:46.787 20:38:11 -- accel/accel.sh@41 -- # jq -r . 00:06:46.787 [2024-04-24 20:38:11.181590] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:06:46.787 [2024-04-24 20:38:11.181675] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591871 ] 00:06:46.787 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.787 [2024-04-24 20:38:11.241858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.787 [2024-04-24 20:38:11.304864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.787 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:11 -- accel/accel.sh@20 -- # val=0x1 00:06:46.787 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.787 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.787 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.787 20:38:11 -- accel/accel.sh@20 -- # val=decompress 00:06:46.787 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.787 20:38:11 -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val=software 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@22 -- # accel_module=software 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val=32 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 
20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val=32 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val=2 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val=Yes 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:46.788 20:38:11 -- accel/accel.sh@20 -- # val= 00:06:46.788 20:38:11 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # IFS=: 00:06:46.788 20:38:11 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@20 -- # val= 00:06:48.173 20:38:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # IFS=: 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@20 -- # val= 00:06:48.173 20:38:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # IFS=: 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@20 -- # val= 00:06:48.173 20:38:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # IFS=: 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@20 -- # val= 00:06:48.173 20:38:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # IFS=: 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@20 -- # val= 00:06:48.173 20:38:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # IFS=: 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@20 -- # val= 00:06:48.173 20:38:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # IFS=: 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@20 -- # val= 00:06:48.173 20:38:12 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # IFS=: 00:06:48.173 20:38:12 -- accel/accel.sh@19 -- # read -r var val 00:06:48.173 20:38:12 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.173 20:38:12 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:48.173 20:38:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.173 00:06:48.173 real 0m1.316s 00:06:48.173 user 0m1.227s 00:06:48.173 sys 0m0.101s 00:06:48.173 20:38:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.173 20:38:12 -- common/autotest_common.sh@10 -- # 
set +x 00:06:48.173 ************************************ 00:06:48.173 END TEST accel_deomp_full_mthread 00:06:48.173 ************************************ 00:06:48.173 20:38:12 -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:48.173 20:38:12 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:48.173 20:38:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:48.173 20:38:12 -- accel/accel.sh@137 -- # build_accel_config 00:06:48.173 20:38:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.173 20:38:12 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.173 20:38:12 -- common/autotest_common.sh@10 -- # set +x 00:06:48.173 20:38:12 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.173 20:38:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.173 20:38:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.173 20:38:12 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.173 20:38:12 -- accel/accel.sh@40 -- # local IFS=, 00:06:48.173 20:38:12 -- accel/accel.sh@41 -- # jq -r . 00:06:48.173 ************************************ 00:06:48.173 START TEST accel_dif_functional_tests 00:06:48.173 ************************************ 00:06:48.173 20:38:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:48.173 [2024-04-24 20:38:12.686641] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:48.173 [2024-04-24 20:38:12.686683] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592235 ] 00:06:48.173 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.173 [2024-04-24 20:38:12.762232] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.432 [2024-04-24 20:38:12.829296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.432 [2024-04-24 20:38:12.829438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.432 [2024-04-24 20:38:12.829441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.432 00:06:48.432 00:06:48.432 CUnit - A unit testing framework for C - Version 2.1-3 00:06:48.432 http://cunit.sourceforge.net/ 00:06:48.432 00:06:48.432 00:06:48.432 Suite: accel_dif 00:06:48.432 Test: verify: DIF generated, GUARD check ...passed 00:06:48.432 Test: verify: DIF generated, APPTAG check ...passed 00:06:48.432 Test: verify: DIF generated, REFTAG check ...passed 00:06:48.432 Test: verify: DIF not generated, GUARD check ...[2024-04-24 20:38:12.884821] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:48.432 [2024-04-24 20:38:12.884857] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:48.432 passed 00:06:48.432 Test: verify: DIF not generated, APPTAG check ...[2024-04-24 20:38:12.884886] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:48.432 [2024-04-24 20:38:12.884900] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:48.432 passed 00:06:48.432 Test: verify: DIF not generated, REFTAG check ...[2024-04-24 20:38:12.884916] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:48.432 [2024-04-24 
20:38:12.884931] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:48.432 passed 00:06:48.432 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:48.432 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-24 20:38:12.884977] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:48.432 passed 00:06:48.432 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:48.432 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:48.432 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:48.432 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-24 20:38:12.885090] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:48.432 passed 00:06:48.432 Test: generate copy: DIF generated, GUARD check ...passed 00:06:48.432 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:48.432 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:48.432 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:48.432 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:48.432 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:48.432 Test: generate copy: iovecs-len validate ...[2024-04-24 20:38:12.885278] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:48.432 passed 00:06:48.432 Test: generate copy: buffer alignment validate ...passed 00:06:48.432 00:06:48.432 Run Summary: Type Total Ran Passed Failed Inactive 00:06:48.432 suites 1 1 n/a 0 0 00:06:48.432 tests 20 20 20 0 0 00:06:48.432 asserts 204 204 204 0 n/a 00:06:48.432 00:06:48.432 Elapsed time = 0.002 seconds 00:06:48.432 00:06:48.432 real 0m0.362s 00:06:48.432 user 0m0.437s 00:06:48.432 sys 0m0.145s 00:06:48.432 20:38:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.432 20:38:13 -- common/autotest_common.sh@10 -- # set +x 00:06:48.432 ************************************ 00:06:48.432 END TEST accel_dif_functional_tests 00:06:48.432 ************************************ 00:06:48.432 00:06:48.432 real 0m32.365s 00:06:48.432 user 0m34.078s 00:06:48.432 sys 0m5.398s 00:06:48.432 20:38:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:48.432 20:38:13 -- common/autotest_common.sh@10 -- # set +x 00:06:48.432 ************************************ 00:06:48.432 END TEST accel 00:06:48.432 ************************************ 00:06:48.692 20:38:13 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.692 20:38:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:48.692 20:38:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.692 20:38:13 -- common/autotest_common.sh@10 -- # set +x 00:06:48.692 ************************************ 00:06:48.692 START TEST accel_rpc 00:06:48.692 ************************************ 00:06:48.692 20:38:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:48.692 * Looking for test storage... 
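Several of the accel invocations above pass '-c /dev/fd/62', i.e. the JSON accel configuration is handed to the binary over an extra file descriptor instead of a file on disk. A minimal bash sketch of that idiom follows; the JSON body is a placeholder and this is only an illustration of the technique, not the harness's exact plumbing:

    # Open fd 62 for reading from a here-string holding the JSON config,
    # then let the tool read it back through /dev/fd/62.
    cfg='{"subsystems": []}'        # placeholder config, not the test's real one
    exec 62<<< "$cfg"
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -y
    exec 62<&-                      # close the descriptor again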
00:06:48.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:48.692 20:38:13 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:48.692 20:38:13 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2592372 00:06:48.692 20:38:13 -- accel/accel_rpc.sh@15 -- # waitforlisten 2592372 00:06:48.692 20:38:13 -- common/autotest_common.sh@817 -- # '[' -z 2592372 ']' 00:06:48.692 20:38:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.692 20:38:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:48.692 20:38:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.692 20:38:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:48.692 20:38:13 -- common/autotest_common.sh@10 -- # set +x 00:06:48.692 20:38:13 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:48.953 [2024-04-24 20:38:13.373998] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:48.953 [2024-04-24 20:38:13.374063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592372 ] 00:06:48.953 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.953 [2024-04-24 20:38:13.455315] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.953 [2024-04-24 20:38:13.527601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.920 20:38:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:49.920 20:38:14 -- common/autotest_common.sh@850 -- # return 0 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:49.920 20:38:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:49.920 20:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.920 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 ************************************ 00:06:49.920 START TEST accel_assign_opcode 00:06:49.920 ************************************ 00:06:49.920 20:38:14 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:49.920 20:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.920 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 [2024-04-24 20:38:14.337914] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:49.920 20:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:49.920 20:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.920 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 [2024-04-24 20:38:14.345927] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: 
Operation copy will be assigned to module software 00:06:49.920 20:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:49.920 20:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.920 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 20:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:49.920 20:38:14 -- accel/accel_rpc.sh@42 -- # grep software 00:06:49.920 20:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:49.920 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 20:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:49.920 software 00:06:49.920 00:06:49.920 real 0m0.203s 00:06:49.920 user 0m0.045s 00:06:49.920 sys 0m0.010s 00:06:49.920 20:38:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.920 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:49.920 ************************************ 00:06:49.920 END TEST accel_assign_opcode 00:06:49.920 ************************************ 00:06:50.181 20:38:14 -- accel/accel_rpc.sh@55 -- # killprocess 2592372 00:06:50.181 20:38:14 -- common/autotest_common.sh@936 -- # '[' -z 2592372 ']' 00:06:50.181 20:38:14 -- common/autotest_common.sh@940 -- # kill -0 2592372 00:06:50.181 20:38:14 -- common/autotest_common.sh@941 -- # uname 00:06:50.181 20:38:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:50.181 20:38:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2592372 00:06:50.181 20:38:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:50.181 20:38:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:50.181 20:38:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2592372' 00:06:50.181 killing process with pid 2592372 00:06:50.181 20:38:14 -- common/autotest_common.sh@955 -- # kill 2592372 00:06:50.181 20:38:14 -- common/autotest_common.sh@960 -- # wait 2592372 00:06:50.440 00:06:50.440 real 0m1.613s 00:06:50.440 user 0m1.797s 00:06:50.440 sys 0m0.447s 00:06:50.440 20:38:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:50.440 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:50.440 ************************************ 00:06:50.440 END TEST accel_rpc 00:06:50.440 ************************************ 00:06:50.440 20:38:14 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:50.440 20:38:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:50.440 20:38:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.440 20:38:14 -- common/autotest_common.sh@10 -- # set +x 00:06:50.440 ************************************ 00:06:50.440 START TEST app_cmdline 00:06:50.440 ************************************ 00:06:50.440 20:38:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:50.700 * Looking for test storage... 
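The accel_assign_opcode run above exercises the opcode-to-module assignment RPCs while the target is still waiting for RPC-driven init (spdk_tgt started with --wait-for-rpc). Condensed into plain rpc.py calls, assuming the default /var/tmp/spdk.sock socket, the same flow is roughly:

    # Start the target and keep it in the pre-init state.
    ./build/bin/spdk_tgt --wait-for-rpc &
    # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)

    # Assign the 'copy' opcode; the test first points it at a nonexistent module,
    # then overrides it with the software module before initialization.
    ./scripts/rpc.py accel_assign_opc -o copy -m incorrect
    ./scripts/rpc.py accel_assign_opc -o copy -m software

    # Finish initialization, then confirm 'copy' ended up on the software module.
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # -> software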
00:06:50.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:50.700 20:38:15 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:50.700 20:38:15 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2592822 00:06:50.700 20:38:15 -- app/cmdline.sh@18 -- # waitforlisten 2592822 00:06:50.700 20:38:15 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:50.700 20:38:15 -- common/autotest_common.sh@817 -- # '[' -z 2592822 ']' 00:06:50.700 20:38:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.700 20:38:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.700 20:38:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.700 20:38:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.700 20:38:15 -- common/autotest_common.sh@10 -- # set +x 00:06:50.700 [2024-04-24 20:38:15.150420] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:06:50.700 [2024-04-24 20:38:15.150488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592822 ] 00:06:50.700 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.700 [2024-04-24 20:38:15.229704] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.700 [2024-04-24 20:38:15.299322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.641 20:38:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:51.641 20:38:16 -- common/autotest_common.sh@850 -- # return 0 00:06:51.642 20:38:16 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:51.642 { 00:06:51.642 "version": "SPDK v24.05-pre git sha1 68e12c8e2", 00:06:51.642 "fields": { 00:06:51.642 "major": 24, 00:06:51.642 "minor": 5, 00:06:51.642 "patch": 0, 00:06:51.642 "suffix": "-pre", 00:06:51.642 "commit": "68e12c8e2" 00:06:51.642 } 00:06:51.642 } 00:06:51.642 20:38:16 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:51.642 20:38:16 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:51.642 20:38:16 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:51.642 20:38:16 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:51.642 20:38:16 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:51.642 20:38:16 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:51.642 20:38:16 -- app/cmdline.sh@26 -- # sort 00:06:51.642 20:38:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:51.642 20:38:16 -- common/autotest_common.sh@10 -- # set +x 00:06:51.642 20:38:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:51.642 20:38:16 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:51.642 20:38:16 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:51.642 20:38:16 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.642 20:38:16 -- common/autotest_common.sh@638 -- # local es=0 00:06:51.642 20:38:16 -- 
common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.642 20:38:16 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.642 20:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:51.642 20:38:16 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.642 20:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:51.642 20:38:16 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.642 20:38:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:51.642 20:38:16 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:51.642 20:38:16 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:51.642 20:38:16 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:51.906 request: 00:06:51.906 { 00:06:51.906 "method": "env_dpdk_get_mem_stats", 00:06:51.906 "req_id": 1 00:06:51.906 } 00:06:51.906 Got JSON-RPC error response 00:06:51.906 response: 00:06:51.906 { 00:06:51.906 "code": -32601, 00:06:51.906 "message": "Method not found" 00:06:51.906 } 00:06:51.906 20:38:16 -- common/autotest_common.sh@641 -- # es=1 00:06:51.906 20:38:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:51.906 20:38:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:51.906 20:38:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:51.907 20:38:16 -- app/cmdline.sh@1 -- # killprocess 2592822 00:06:51.907 20:38:16 -- common/autotest_common.sh@936 -- # '[' -z 2592822 ']' 00:06:51.907 20:38:16 -- common/autotest_common.sh@940 -- # kill -0 2592822 00:06:51.907 20:38:16 -- common/autotest_common.sh@941 -- # uname 00:06:51.907 20:38:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.907 20:38:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2592822 00:06:51.907 20:38:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:51.907 20:38:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:51.907 20:38:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2592822' 00:06:51.907 killing process with pid 2592822 00:06:51.907 20:38:16 -- common/autotest_common.sh@955 -- # kill 2592822 00:06:51.907 20:38:16 -- common/autotest_common.sh@960 -- # wait 2592822 00:06:52.167 00:06:52.167 real 0m1.722s 00:06:52.167 user 0m2.167s 00:06:52.167 sys 0m0.426s 00:06:52.167 20:38:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.167 20:38:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.167 ************************************ 00:06:52.167 END TEST app_cmdline 00:06:52.167 ************************************ 00:06:52.168 20:38:16 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:52.168 20:38:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.168 20:38:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.168 20:38:16 -- common/autotest_common.sh@10 -- # set +x 00:06:52.431 ************************************ 00:06:52.431 START TEST version 00:06:52.431 
************************************ 00:06:52.431 20:38:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:52.431 * Looking for test storage... 00:06:52.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:52.431 20:38:16 -- app/version.sh@17 -- # get_header_version major 00:06:52.431 20:38:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.431 20:38:16 -- app/version.sh@14 -- # cut -f2 00:06:52.431 20:38:16 -- app/version.sh@14 -- # tr -d '"' 00:06:52.431 20:38:16 -- app/version.sh@17 -- # major=24 00:06:52.431 20:38:16 -- app/version.sh@18 -- # get_header_version minor 00:06:52.431 20:38:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.431 20:38:16 -- app/version.sh@14 -- # cut -f2 00:06:52.431 20:38:16 -- app/version.sh@14 -- # tr -d '"' 00:06:52.431 20:38:16 -- app/version.sh@18 -- # minor=5 00:06:52.431 20:38:16 -- app/version.sh@19 -- # get_header_version patch 00:06:52.431 20:38:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.431 20:38:16 -- app/version.sh@14 -- # cut -f2 00:06:52.431 20:38:16 -- app/version.sh@14 -- # tr -d '"' 00:06:52.431 20:38:16 -- app/version.sh@19 -- # patch=0 00:06:52.431 20:38:16 -- app/version.sh@20 -- # get_header_version suffix 00:06:52.431 20:38:17 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:52.431 20:38:17 -- app/version.sh@14 -- # cut -f2 00:06:52.431 20:38:17 -- app/version.sh@14 -- # tr -d '"' 00:06:52.431 20:38:17 -- app/version.sh@20 -- # suffix=-pre 00:06:52.431 20:38:17 -- app/version.sh@22 -- # version=24.5 00:06:52.431 20:38:17 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:52.431 20:38:17 -- app/version.sh@28 -- # version=24.5rc0 00:06:52.431 20:38:17 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:52.431 20:38:17 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:52.431 20:38:17 -- app/version.sh@30 -- # py_version=24.5rc0 00:06:52.431 20:38:17 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:52.431 00:06:52.431 real 0m0.187s 00:06:52.431 user 0m0.092s 00:06:52.431 sys 0m0.134s 00:06:52.431 20:38:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:52.431 20:38:17 -- common/autotest_common.sh@10 -- # set +x 00:06:52.431 ************************************ 00:06:52.431 END TEST version 00:06:52.431 ************************************ 00:06:52.691 20:38:17 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:52.691 20:38:17 -- spdk/autotest.sh@194 -- # uname -s 00:06:52.691 20:38:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:52.691 20:38:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:52.691 20:38:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:52.691 20:38:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:52.691 20:38:17 
-- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:52.691 20:38:17 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:52.691 20:38:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:52.691 20:38:17 -- common/autotest_common.sh@10 -- # set +x 00:06:52.691 20:38:17 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:52.691 20:38:17 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:52.691 20:38:17 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:52.691 20:38:17 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:52.691 20:38:17 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:52.691 20:38:17 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:52.691 20:38:17 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.691 20:38:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:52.691 20:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.691 20:38:17 -- common/autotest_common.sh@10 -- # set +x 00:06:52.691 ************************************ 00:06:52.691 START TEST nvmf_tcp 00:06:52.691 ************************************ 00:06:52.691 20:38:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:52.952 * Looking for test storage... 00:06:52.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:52.952 20:38:17 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:52.952 20:38:17 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:52.952 20:38:17 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.952 20:38:17 -- nvmf/common.sh@7 -- # uname -s 00:06:52.952 20:38:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.952 20:38:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.952 20:38:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.952 20:38:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.952 20:38:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.952 20:38:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.952 20:38:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.952 20:38:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.952 20:38:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.953 20:38:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.953 20:38:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:52.953 20:38:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:52.953 20:38:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.953 20:38:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.953 20:38:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.953 20:38:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.953 20:38:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.953 20:38:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.953 20:38:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.953 20:38:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.953 20:38:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.953 20:38:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.953 20:38:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.953 20:38:17 -- paths/export.sh@5 -- # export PATH 00:06:52.953 20:38:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.953 20:38:17 -- nvmf/common.sh@47 -- # : 0 00:06:52.953 20:38:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.953 20:38:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.953 20:38:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.953 20:38:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.953 20:38:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.953 20:38:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.953 20:38:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.953 20:38:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.953 20:38:17 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:52.953 20:38:17 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:52.953 20:38:17 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:52.953 20:38:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:52.953 20:38:17 -- common/autotest_common.sh@10 -- # set +x 00:06:52.953 20:38:17 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:52.953 20:38:17 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:52.953 20:38:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:52.953 20:38:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.953 20:38:17 -- common/autotest_common.sh@10 -- # set +x 00:06:52.953 ************************************ 00:06:52.953 START TEST nvmf_example 00:06:52.953 ************************************ 00:06:52.953 20:38:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:53.215 * Looking for test storage... 
00:06:53.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.215 20:38:17 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.215 20:38:17 -- nvmf/common.sh@7 -- # uname -s 00:06:53.215 20:38:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.215 20:38:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.215 20:38:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.215 20:38:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.215 20:38:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.215 20:38:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.215 20:38:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.215 20:38:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.215 20:38:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.215 20:38:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.215 20:38:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:53.215 20:38:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:06:53.215 20:38:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.215 20:38:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.215 20:38:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.215 20:38:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.215 20:38:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.215 20:38:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.215 20:38:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.215 20:38:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.215 20:38:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.215 20:38:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.215 20:38:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.215 20:38:17 -- paths/export.sh@5 -- # export PATH 00:06:53.215 20:38:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.215 20:38:17 -- nvmf/common.sh@47 -- # : 0 00:06:53.215 20:38:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.215 20:38:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.215 20:38:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.215 20:38:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.215 20:38:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.215 20:38:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.215 20:38:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.215 20:38:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.215 20:38:17 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:53.215 20:38:17 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:53.215 20:38:17 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:53.215 20:38:17 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:53.215 20:38:17 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:53.215 20:38:17 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:53.215 20:38:17 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:53.215 20:38:17 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:53.215 20:38:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:53.215 20:38:17 -- common/autotest_common.sh@10 -- # set +x 00:06:53.215 20:38:17 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:53.215 20:38:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:06:53.215 20:38:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.215 20:38:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:06:53.215 20:38:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:06:53.215 20:38:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:06:53.215 20:38:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.215 20:38:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.215 20:38:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.215 20:38:17 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:06:53.215 20:38:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:06:53.215 20:38:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.215 20:38:17 -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.358 20:38:24 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:01.358 20:38:24 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:01.358 20:38:24 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:01.358 20:38:24 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:01.358 20:38:24 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:01.358 20:38:24 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:01.358 20:38:24 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:01.358 20:38:24 -- nvmf/common.sh@295 -- # net_devs=() 00:07:01.358 20:38:24 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:01.358 20:38:24 -- nvmf/common.sh@296 -- # e810=() 00:07:01.358 20:38:24 -- nvmf/common.sh@296 -- # local -ga e810 00:07:01.358 20:38:24 -- nvmf/common.sh@297 -- # x722=() 00:07:01.358 20:38:24 -- nvmf/common.sh@297 -- # local -ga x722 00:07:01.358 20:38:24 -- nvmf/common.sh@298 -- # mlx=() 00:07:01.358 20:38:24 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:01.358 20:38:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.358 20:38:24 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:01.358 20:38:24 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:01.358 20:38:24 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:01.358 20:38:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.358 20:38:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:01.358 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:01.358 20:38:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:01.358 20:38:24 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:01.358 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:01.358 20:38:24 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
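The device scan above has matched two Intel E810 ports (vendor 0x8086, device 0x159b) at 0000:4b:00.0 and 0000:4b:00.1, which become the physical NICs for the TCP tests. Outside the harness the same devices can be located with a plain lspci filter and sysfs lookup (illustrative only; the BDF is the one reported above):

    # List E810-family ports by vendor:device ID.
    lspci -d 8086:159b
    # The bound net interface for a given port lives under sysfs,
    # which is also what the harness globs to find cvl_0_0 / cvl_0_1.
    ls /sys/bus/pci/devices/0000:4b:00.0/net/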
00:07:01.358 20:38:24 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:01.358 20:38:24 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:01.358 20:38:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.358 20:38:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.358 20:38:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:01.358 20:38:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.358 20:38:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:01.358 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:01.358 20:38:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.358 20:38:24 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:01.359 20:38:24 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.359 20:38:24 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:01.359 20:38:24 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.359 20:38:24 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:01.359 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:01.359 20:38:24 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.359 20:38:24 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:01.359 20:38:24 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:01.359 20:38:24 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:01.359 20:38:24 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:01.359 20:38:24 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:01.359 20:38:24 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:01.359 20:38:24 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.359 20:38:24 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.359 20:38:24 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:01.359 20:38:24 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.359 20:38:24 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.359 20:38:24 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:01.359 20:38:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.359 20:38:24 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.359 20:38:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:01.359 20:38:24 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:01.359 20:38:24 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.359 20:38:24 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.359 20:38:24 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.359 20:38:24 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.359 20:38:24 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:01.359 20:38:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.359 20:38:24 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.359 20:38:24 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.359 20:38:24 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:01.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:01.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.590 ms 00:07:01.359 00:07:01.359 --- 10.0.0.2 ping statistics --- 00:07:01.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.359 rtt min/avg/max/mdev = 0.590/0.590/0.590/0.000 ms 00:07:01.359 20:38:24 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:01.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:07:01.359 00:07:01.359 --- 10.0.0.1 ping statistics --- 00:07:01.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.359 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:07:01.359 20:38:24 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.359 20:38:24 -- nvmf/common.sh@411 -- # return 0 00:07:01.359 20:38:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:01.359 20:38:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.359 20:38:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:01.359 20:38:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:01.359 20:38:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.359 20:38:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:01.359 20:38:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:01.359 20:38:25 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:01.359 20:38:25 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:01.359 20:38:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:01.359 20:38:25 -- common/autotest_common.sh@10 -- # set +x 00:07:01.359 20:38:25 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:01.359 20:38:25 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:01.359 20:38:25 -- target/nvmf_example.sh@34 -- # nvmfpid=2597184 00:07:01.359 20:38:25 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:01.359 20:38:25 -- target/nvmf_example.sh@36 -- # waitforlisten 2597184 00:07:01.359 20:38:25 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:01.359 20:38:25 -- common/autotest_common.sh@817 -- # '[' -z 2597184 ']' 00:07:01.359 20:38:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.359 20:38:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:01.359 20:38:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
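nvmf_tcp_init, traced above, splits the two ports into a small point-to-point topology: cvl_0_0 is moved into a fresh network namespace and becomes the target-side interface (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in the firewall, and both directions are pinged. A stripped-down sketch of the same setup with placeholder interface names (tgt0/ini0 and the namespace name are not the harness's real names):

    # Target interface goes into its own namespace; initiator stays in the root ns.
    ip netns add spdk_tgt_ns
    ip link set tgt0 netns spdk_tgt_ns
    ip addr add 10.0.0.1/24 dev ini0
    ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev tgt0
    ip link set ini0 up
    ip netns exec spdk_tgt_ns ip link set tgt0 up
    ip netns exec spdk_tgt_ns ip link set lo up
    # Allow NVMe/TCP traffic on the default port the test uses (4420).
    iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check connectivity in both directions.
    ping -c 1 10.0.0.2
    ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1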
00:07:01.359 20:38:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:01.359 20:38:25 -- common/autotest_common.sh@10 -- # set +x 00:07:01.359 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.359 20:38:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.359 20:38:25 -- common/autotest_common.sh@850 -- # return 0 00:07:01.359 20:38:25 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:01.359 20:38:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:01.359 20:38:25 -- common/autotest_common.sh@10 -- # set +x 00:07:01.621 20:38:25 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.621 20:38:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.621 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.621 20:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.621 20:38:26 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:01.621 20:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.621 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.621 20:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.621 20:38:26 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:01.621 20:38:26 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:01.621 20:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.621 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.621 20:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.621 20:38:26 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:01.621 20:38:26 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:01.621 20:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.621 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.621 20:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.621 20:38:26 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.621 20:38:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:01.621 20:38:26 -- common/autotest_common.sh@10 -- # set +x 00:07:01.621 20:38:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:01.621 20:38:26 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:01.621 20:38:26 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:01.621 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.627 Initializing NVMe Controllers 00:07:11.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:11.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:11.627 Initialization complete. Launching workers. 
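The target provisioning traced above can be reproduced against a running SPDK target with the standalone RPC client; the method names and flags are copied from this run, while the scripts/rpc.py path and the assumption that the harness's rpc_cmd wrapper forwards to that client are illustrative.

rpc=scripts/rpc.py                                               # assumed location of the RPC client
$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, options as in the run above
$rpc bdev_malloc_create 64 512                                   # 64 MB RAM-backed bdev, 512-byte blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The Latency(us) table that follows is the output of this perf invocation against the just-created subsystem.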
00:07:11.627 ======================================================== 00:07:11.628 Latency(us) 00:07:11.628 Device Information : IOPS MiB/s Average min max 00:07:11.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17072.69 66.69 3748.33 698.30 15275.78 00:07:11.628 ======================================================== 00:07:11.628 Total : 17072.69 66.69 3748.33 698.30 15275.78 00:07:11.628 00:07:11.628 20:38:36 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:11.628 20:38:36 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:11.628 20:38:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:11.628 20:38:36 -- nvmf/common.sh@117 -- # sync 00:07:11.628 20:38:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.628 20:38:36 -- nvmf/common.sh@120 -- # set +e 00:07:11.628 20:38:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.628 20:38:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.889 rmmod nvme_tcp 00:07:11.889 rmmod nvme_fabrics 00:07:11.889 rmmod nvme_keyring 00:07:11.889 20:38:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.889 20:38:36 -- nvmf/common.sh@124 -- # set -e 00:07:11.889 20:38:36 -- nvmf/common.sh@125 -- # return 0 00:07:11.889 20:38:36 -- nvmf/common.sh@478 -- # '[' -n 2597184 ']' 00:07:11.889 20:38:36 -- nvmf/common.sh@479 -- # killprocess 2597184 00:07:11.889 20:38:36 -- common/autotest_common.sh@936 -- # '[' -z 2597184 ']' 00:07:11.889 20:38:36 -- common/autotest_common.sh@940 -- # kill -0 2597184 00:07:11.889 20:38:36 -- common/autotest_common.sh@941 -- # uname 00:07:11.889 20:38:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.889 20:38:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2597184 00:07:11.889 20:38:36 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:07:11.889 20:38:36 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:07:11.889 20:38:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2597184' 00:07:11.889 killing process with pid 2597184 00:07:11.889 20:38:36 -- common/autotest_common.sh@955 -- # kill 2597184 00:07:11.889 20:38:36 -- common/autotest_common.sh@960 -- # wait 2597184 00:07:11.889 nvmf threads initialize successfully 00:07:11.889 bdev subsystem init successfully 00:07:11.889 created a nvmf target service 00:07:11.889 create targets's poll groups done 00:07:11.889 all subsystems of target started 00:07:11.889 nvmf target is running 00:07:11.889 all subsystems of target stopped 00:07:11.889 destroy targets's poll groups done 00:07:11.889 destroyed the nvmf target service 00:07:11.889 bdev subsystem finish successfully 00:07:11.889 nvmf threads destroy successfully 00:07:11.889 20:38:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:11.889 20:38:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:11.889 20:38:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:11.889 20:38:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.889 20:38:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.889 20:38:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.889 20:38:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:11.889 20:38:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.448 20:38:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:14.448 20:38:38 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:14.448 20:38:38 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:07:14.448 20:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:14.448 00:07:14.448 real 0m21.082s 00:07:14.448 user 0m46.564s 00:07:14.448 sys 0m6.651s 00:07:14.448 20:38:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.448 20:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:14.448 ************************************ 00:07:14.448 END TEST nvmf_example 00:07:14.448 ************************************ 00:07:14.448 20:38:38 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:14.448 20:38:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:14.448 20:38:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.448 20:38:38 -- common/autotest_common.sh@10 -- # set +x 00:07:14.448 ************************************ 00:07:14.448 START TEST nvmf_filesystem 00:07:14.448 ************************************ 00:07:14.448 20:38:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:14.448 * Looking for test storage... 00:07:14.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.448 20:38:38 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:14.448 20:38:38 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:14.448 20:38:38 -- common/autotest_common.sh@34 -- # set -e 00:07:14.448 20:38:38 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:14.448 20:38:38 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:14.448 20:38:38 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:14.448 20:38:38 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:14.448 20:38:38 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:14.448 20:38:38 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:14.448 20:38:38 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:14.448 20:38:38 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:14.448 20:38:38 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:14.448 20:38:38 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:14.448 20:38:38 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:14.448 20:38:38 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:14.448 20:38:38 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:14.448 20:38:38 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:14.448 20:38:38 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:14.448 20:38:38 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:14.448 20:38:38 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:14.448 20:38:38 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:14.448 20:38:38 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:14.448 20:38:38 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:14.448 20:38:38 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:14.448 20:38:38 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:14.448 20:38:38 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:14.448 20:38:38 -- 
common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:14.448 20:38:38 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:14.448 20:38:38 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:14.448 20:38:38 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:14.448 20:38:38 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:14.448 20:38:38 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:14.448 20:38:38 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:14.448 20:38:38 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:14.448 20:38:38 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:14.448 20:38:38 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:14.448 20:38:38 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:14.448 20:38:38 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:14.448 20:38:38 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:14.448 20:38:38 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:14.449 20:38:38 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:14.449 20:38:38 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:14.449 20:38:38 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:14.449 20:38:38 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:14.449 20:38:38 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:14.449 20:38:38 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:14.449 20:38:38 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:14.449 20:38:38 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:14.449 20:38:38 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:14.449 20:38:38 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:14.449 20:38:38 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:14.449 20:38:38 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:14.449 20:38:38 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:14.449 20:38:38 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:14.449 20:38:38 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:14.449 20:38:38 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:14.449 20:38:38 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:14.449 20:38:38 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:14.449 20:38:38 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:14.449 20:38:38 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:14.449 20:38:38 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:14.449 20:38:38 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:07:14.449 20:38:38 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:14.449 20:38:38 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:14.449 20:38:38 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:14.449 20:38:38 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:14.449 20:38:38 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:14.449 20:38:38 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:14.449 20:38:38 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:14.449 20:38:38 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:07:14.449 20:38:38 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:14.449 20:38:38 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:14.449 
20:38:38 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:14.449 20:38:38 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:14.449 20:38:38 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:14.449 20:38:38 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:14.449 20:38:38 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:14.449 20:38:38 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:14.449 20:38:38 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:14.449 20:38:38 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:14.449 20:38:38 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:14.449 20:38:38 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:14.449 20:38:38 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:14.449 20:38:38 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:14.449 20:38:38 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:14.449 20:38:38 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:14.449 20:38:38 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:14.449 20:38:38 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:14.449 20:38:38 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:14.449 20:38:38 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:07:14.449 20:38:38 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:14.449 20:38:38 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:14.449 20:38:38 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:14.449 20:38:38 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:14.449 20:38:38 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.449 20:38:38 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.449 20:38:38 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:14.449 20:38:38 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.449 20:38:38 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:14.449 20:38:38 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:14.449 20:38:38 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:14.449 20:38:38 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:14.449 20:38:38 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:14.449 20:38:38 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:14.449 20:38:38 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:14.449 20:38:38 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:14.449 #define SPDK_CONFIG_H 00:07:14.449 #define SPDK_CONFIG_APPS 1 00:07:14.449 #define SPDK_CONFIG_ARCH native 00:07:14.449 #undef SPDK_CONFIG_ASAN 00:07:14.449 #undef SPDK_CONFIG_AVAHI 00:07:14.449 #undef SPDK_CONFIG_CET 00:07:14.449 #define SPDK_CONFIG_COVERAGE 1 00:07:14.449 #define SPDK_CONFIG_CROSS_PREFIX 00:07:14.449 #undef SPDK_CONFIG_CRYPTO 00:07:14.449 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:14.449 #undef 
SPDK_CONFIG_CUSTOMOCF 00:07:14.449 #undef SPDK_CONFIG_DAOS 00:07:14.449 #define SPDK_CONFIG_DAOS_DIR 00:07:14.449 #define SPDK_CONFIG_DEBUG 1 00:07:14.449 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:14.449 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:14.449 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:14.449 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:14.449 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:14.449 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:14.449 #define SPDK_CONFIG_EXAMPLES 1 00:07:14.449 #undef SPDK_CONFIG_FC 00:07:14.449 #define SPDK_CONFIG_FC_PATH 00:07:14.449 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:14.449 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:14.449 #undef SPDK_CONFIG_FUSE 00:07:14.449 #undef SPDK_CONFIG_FUZZER 00:07:14.449 #define SPDK_CONFIG_FUZZER_LIB 00:07:14.449 #undef SPDK_CONFIG_GOLANG 00:07:14.449 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:14.449 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:14.449 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:14.449 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:07:14.449 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:14.449 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:14.449 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:14.449 #define SPDK_CONFIG_IDXD 1 00:07:14.449 #undef SPDK_CONFIG_IDXD_KERNEL 00:07:14.449 #undef SPDK_CONFIG_IPSEC_MB 00:07:14.449 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:14.449 #define SPDK_CONFIG_ISAL 1 00:07:14.449 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:14.449 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:14.449 #define SPDK_CONFIG_LIBDIR 00:07:14.449 #undef SPDK_CONFIG_LTO 00:07:14.449 #define SPDK_CONFIG_MAX_LCORES 00:07:14.449 #define SPDK_CONFIG_NVME_CUSE 1 00:07:14.449 #undef SPDK_CONFIG_OCF 00:07:14.449 #define SPDK_CONFIG_OCF_PATH 00:07:14.449 #define SPDK_CONFIG_OPENSSL_PATH 00:07:14.449 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:14.449 #define SPDK_CONFIG_PGO_DIR 00:07:14.449 #undef SPDK_CONFIG_PGO_USE 00:07:14.449 #define SPDK_CONFIG_PREFIX /usr/local 00:07:14.449 #undef SPDK_CONFIG_RAID5F 00:07:14.449 #undef SPDK_CONFIG_RBD 00:07:14.449 #define SPDK_CONFIG_RDMA 1 00:07:14.449 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:14.449 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:14.449 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:14.449 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:14.449 #define SPDK_CONFIG_SHARED 1 00:07:14.449 #undef SPDK_CONFIG_SMA 00:07:14.449 #define SPDK_CONFIG_TESTS 1 00:07:14.449 #undef SPDK_CONFIG_TSAN 00:07:14.449 #define SPDK_CONFIG_UBLK 1 00:07:14.449 #define SPDK_CONFIG_UBSAN 1 00:07:14.449 #undef SPDK_CONFIG_UNIT_TESTS 00:07:14.449 #undef SPDK_CONFIG_URING 00:07:14.449 #define SPDK_CONFIG_URING_PATH 00:07:14.449 #undef SPDK_CONFIG_URING_ZNS 00:07:14.449 #undef SPDK_CONFIG_USDT 00:07:14.449 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:14.449 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:14.449 #define SPDK_CONFIG_VFIO_USER 1 00:07:14.449 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:14.449 #define SPDK_CONFIG_VHOST 1 00:07:14.449 #define SPDK_CONFIG_VIRTIO 1 00:07:14.449 #undef SPDK_CONFIG_VTUNE 00:07:14.449 #define SPDK_CONFIG_VTUNE_DIR 00:07:14.449 #define SPDK_CONFIG_WERROR 1 00:07:14.449 #define SPDK_CONFIG_WPDK_DIR 00:07:14.449 #undef SPDK_CONFIG_XNVME 00:07:14.449 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:14.449 20:38:38 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:14.449 20:38:38 -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.449 20:38:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.449 20:38:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.449 20:38:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.449 20:38:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.449 20:38:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.450 20:38:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.450 20:38:38 -- paths/export.sh@5 -- # export PATH 00:07:14.450 20:38:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.450 20:38:38 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:14.450 20:38:38 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:14.450 20:38:38 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:14.450 20:38:38 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:14.450 20:38:38 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:14.450 20:38:38 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:14.450 20:38:38 -- pm/common@67 -- # TEST_TAG=N/A 00:07:14.450 20:38:38 -- pm/common@68 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:14.450 20:38:38 -- pm/common@70 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:14.450 20:38:38 -- pm/common@71 -- # uname -s 00:07:14.450 20:38:38 -- pm/common@71 -- # PM_OS=Linux 00:07:14.450 20:38:38 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:14.450 20:38:38 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:07:14.450 20:38:38 -- pm/common@76 -- # [[ Linux == Linux ]] 00:07:14.450 20:38:38 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:07:14.450 20:38:38 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:07:14.450 20:38:38 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:14.450 20:38:38 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:14.450 20:38:38 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:07:14.450 20:38:38 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:07:14.450 20:38:38 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:14.450 20:38:38 -- common/autotest_common.sh@57 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:07:14.450 20:38:38 -- common/autotest_common.sh@61 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:14.450 20:38:38 -- common/autotest_common.sh@63 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:07:14.450 20:38:38 -- common/autotest_common.sh@65 -- # : 1 00:07:14.450 20:38:38 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:14.450 20:38:38 -- common/autotest_common.sh@67 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:07:14.450 20:38:38 -- common/autotest_common.sh@69 -- # : 00:07:14.450 20:38:38 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:07:14.450 20:38:38 -- common/autotest_common.sh@71 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:07:14.450 20:38:38 -- common/autotest_common.sh@73 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:07:14.450 20:38:38 -- common/autotest_common.sh@75 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:07:14.450 20:38:38 -- common/autotest_common.sh@77 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:14.450 20:38:38 -- common/autotest_common.sh@79 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:07:14.450 20:38:38 -- common/autotest_common.sh@81 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:07:14.450 20:38:38 -- common/autotest_common.sh@83 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:07:14.450 20:38:38 -- common/autotest_common.sh@85 -- # : 1 00:07:14.450 20:38:38 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:07:14.450 20:38:38 -- common/autotest_common.sh@87 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:07:14.450 20:38:38 -- common/autotest_common.sh@89 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:07:14.450 20:38:38 -- common/autotest_common.sh@91 -- # : 1 
00:07:14.450 20:38:38 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:07:14.450 20:38:38 -- common/autotest_common.sh@93 -- # : 1 00:07:14.450 20:38:38 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:07:14.450 20:38:38 -- common/autotest_common.sh@95 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:14.450 20:38:38 -- common/autotest_common.sh@97 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:07:14.450 20:38:38 -- common/autotest_common.sh@99 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:07:14.450 20:38:38 -- common/autotest_common.sh@101 -- # : tcp 00:07:14.450 20:38:38 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:14.450 20:38:38 -- common/autotest_common.sh@103 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:07:14.450 20:38:38 -- common/autotest_common.sh@105 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:07:14.450 20:38:38 -- common/autotest_common.sh@107 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:07:14.450 20:38:38 -- common/autotest_common.sh@109 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:07:14.450 20:38:38 -- common/autotest_common.sh@111 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:07:14.450 20:38:38 -- common/autotest_common.sh@113 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:07:14.450 20:38:38 -- common/autotest_common.sh@115 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:07:14.450 20:38:38 -- common/autotest_common.sh@117 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:14.450 20:38:38 -- common/autotest_common.sh@119 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:07:14.450 20:38:38 -- common/autotest_common.sh@121 -- # : 1 00:07:14.450 20:38:38 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:07:14.450 20:38:38 -- common/autotest_common.sh@123 -- # : 00:07:14.450 20:38:38 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:14.450 20:38:38 -- common/autotest_common.sh@125 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:07:14.450 20:38:38 -- common/autotest_common.sh@127 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:07:14.450 20:38:38 -- common/autotest_common.sh@129 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:07:14.450 20:38:38 -- common/autotest_common.sh@131 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:07:14.450 20:38:38 -- common/autotest_common.sh@133 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:07:14.450 20:38:38 -- common/autotest_common.sh@135 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:07:14.450 20:38:38 -- common/autotest_common.sh@137 -- # : 00:07:14.450 20:38:38 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:07:14.450 20:38:38 -- 
common/autotest_common.sh@139 -- # : true 00:07:14.450 20:38:38 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:07:14.450 20:38:38 -- common/autotest_common.sh@141 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:07:14.450 20:38:38 -- common/autotest_common.sh@143 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:07:14.450 20:38:38 -- common/autotest_common.sh@145 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:07:14.450 20:38:38 -- common/autotest_common.sh@147 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:07:14.450 20:38:38 -- common/autotest_common.sh@149 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:07:14.450 20:38:38 -- common/autotest_common.sh@151 -- # : 0 00:07:14.450 20:38:38 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:07:14.450 20:38:39 -- common/autotest_common.sh@153 -- # : e810 00:07:14.450 20:38:39 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:07:14.450 20:38:39 -- common/autotest_common.sh@155 -- # : 0 00:07:14.450 20:38:39 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:07:14.450 20:38:39 -- common/autotest_common.sh@157 -- # : 0 00:07:14.450 20:38:39 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:07:14.450 20:38:39 -- common/autotest_common.sh@159 -- # : 0 00:07:14.450 20:38:39 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:07:14.450 20:38:39 -- common/autotest_common.sh@161 -- # : 0 00:07:14.450 20:38:39 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:07:14.450 20:38:39 -- common/autotest_common.sh@163 -- # : 0 00:07:14.450 20:38:39 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:07:14.450 20:38:39 -- common/autotest_common.sh@166 -- # : 00:07:14.450 20:38:39 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:07:14.450 20:38:39 -- common/autotest_common.sh@168 -- # : 0 00:07:14.450 20:38:39 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:07:14.450 20:38:39 -- common/autotest_common.sh@170 -- # : 0 00:07:14.450 20:38:39 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:14.450 20:38:39 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:14.450 20:38:39 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:14.450 20:38:39 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:14.450 20:38:39 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:14.450 20:38:39 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.450 20:38:39 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.451 20:38:39 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.451 20:38:39 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:14.451 20:38:39 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:14.451 20:38:39 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:14.451 20:38:39 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:14.451 20:38:39 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:14.451 20:38:39 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:14.451 20:38:39 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:07:14.451 20:38:39 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:14.451 20:38:39 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:14.451 20:38:39 -- common/autotest_common.sh@193 -- # export 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:14.451 20:38:39 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:14.451 20:38:39 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:14.451 20:38:39 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:07:14.451 20:38:39 -- common/autotest_common.sh@199 -- # cat 00:07:14.451 20:38:39 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:07:14.451 20:38:39 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:14.451 20:38:39 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:14.451 20:38:39 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:14.451 20:38:39 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:14.451 20:38:39 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:07:14.451 20:38:39 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:07:14.451 20:38:39 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.451 20:38:39 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:14.451 20:38:39 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.451 20:38:39 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:14.451 20:38:39 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:14.451 20:38:39 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:14.451 20:38:39 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:14.451 20:38:39 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:14.451 20:38:39 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:14.451 20:38:39 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:14.451 20:38:39 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:14.451 20:38:39 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:14.451 20:38:39 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:07:14.451 20:38:39 -- common/autotest_common.sh@252 -- # export valgrind= 00:07:14.451 20:38:39 -- common/autotest_common.sh@252 -- # valgrind= 00:07:14.451 20:38:39 -- common/autotest_common.sh@258 -- # uname -s 00:07:14.451 20:38:39 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:07:14.451 20:38:39 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:07:14.451 20:38:39 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:07:14.451 20:38:39 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:07:14.451 20:38:39 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:14.451 20:38:39 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:07:14.451 
20:38:39 -- common/autotest_common.sh@268 -- # MAKE=make 00:07:14.451 20:38:39 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j144 00:07:14.451 20:38:39 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:07:14.451 20:38:39 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:07:14.451 20:38:39 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:07:14.451 20:38:39 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:07:14.451 20:38:39 -- common/autotest_common.sh@289 -- # for i in "$@" 00:07:14.451 20:38:39 -- common/autotest_common.sh@290 -- # case "$i" in 00:07:14.451 20:38:39 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:07:14.451 20:38:39 -- common/autotest_common.sh@307 -- # [[ -z 2599991 ]] 00:07:14.451 20:38:39 -- common/autotest_common.sh@307 -- # kill -0 2599991 00:07:14.451 20:38:39 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:07:14.451 20:38:39 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:07:14.451 20:38:39 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:07:14.451 20:38:39 -- common/autotest_common.sh@320 -- # local mount target_dir 00:07:14.451 20:38:39 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:07:14.451 20:38:39 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:07:14.451 20:38:39 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:07:14.451 20:38:39 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:07:14.451 20:38:39 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.ClBgW0 00:07:14.451 20:38:39 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:14.451 20:38:39 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:07:14.451 20:38:39 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:14.451 20:38:39 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ClBgW0/tests/target /tmp/spdk.ClBgW0 00:07:14.451 20:38:39 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:07:14.451 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.451 20:38:39 -- common/autotest_common.sh@316 -- # df -T 00:07:14.451 20:38:39 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:07:14.451 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:07:14.451 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:07:14.451 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:07:14.451 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # 
mounts["$mount"]=spdk_root 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=118980972544 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=129370972160 00:07:14.451 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=10389999616 00:07:14.451 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=64682872832 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685486080 00:07:14.451 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:07:14.451 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=25864474624 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=25874194432 00:07:14.451 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=9719808 00:07:14.451 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=391168 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:07:14.451 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=112640 00:07:14.451 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=64684818432 00:07:14.451 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=64685486080 00:07:14.452 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=667648 00:07:14.452 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.452 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.452 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.452 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:07:14.452 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:07:14.452 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:14.452 20:38:39 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.452 20:38:39 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:07:14.452 20:38:39 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:07:14.452 20:38:39 -- common/autotest_common.sh@351 -- # avails["$mount"]=12937093120 00:07:14.452 20:38:39 -- common/autotest_common.sh@351 -- # sizes["$mount"]=12937097216 00:07:14.452 20:38:39 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:07:14.452 20:38:39 -- 
common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:07:14.452 20:38:39 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:07:14.452 * Looking for test storage... 00:07:14.452 20:38:39 -- common/autotest_common.sh@357 -- # local target_space new_size 00:07:14.452 20:38:39 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:07:14.452 20:38:39 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.452 20:38:39 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:14.452 20:38:39 -- common/autotest_common.sh@361 -- # mount=/ 00:07:14.452 20:38:39 -- common/autotest_common.sh@363 -- # target_space=118980972544 00:07:14.452 20:38:39 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:07:14.452 20:38:39 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:07:14.452 20:38:39 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:07:14.452 20:38:39 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:07:14.452 20:38:39 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:07:14.452 20:38:39 -- common/autotest_common.sh@370 -- # new_size=12604592128 00:07:14.452 20:38:39 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:14.452 20:38:39 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.452 20:38:39 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.452 20:38:39 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.452 20:38:39 -- common/autotest_common.sh@378 -- # return 0 00:07:14.452 20:38:39 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:07:14.452 20:38:39 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:07:14.452 20:38:39 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:14.452 20:38:39 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:14.452 20:38:39 -- common/autotest_common.sh@1673 -- # true 00:07:14.452 20:38:39 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:07:14.452 20:38:39 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:14.452 20:38:39 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:14.452 20:38:39 -- common/autotest_common.sh@27 -- # exec 00:07:14.452 20:38:39 -- common/autotest_common.sh@29 -- # exec 00:07:14.452 20:38:39 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:14.452 20:38:39 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:14.452 20:38:39 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:14.452 20:38:39 -- common/autotest_common.sh@18 -- # set -x 00:07:14.452 20:38:39 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.452 20:38:39 -- nvmf/common.sh@7 -- # uname -s 00:07:14.714 20:38:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.714 20:38:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.714 20:38:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.714 20:38:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.714 20:38:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.714 20:38:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.714 20:38:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.714 20:38:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.714 20:38:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.714 20:38:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.714 20:38:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:14.714 20:38:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:14.714 20:38:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.714 20:38:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.714 20:38:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.714 20:38:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.714 20:38:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.714 20:38:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.714 20:38:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.714 20:38:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.714 20:38:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.714 20:38:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.714 20:38:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.714 20:38:39 -- paths/export.sh@5 -- # export PATH 00:07:14.714 20:38:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.714 20:38:39 -- nvmf/common.sh@47 -- # : 0 00:07:14.714 20:38:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.714 20:38:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.714 20:38:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.714 20:38:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.714 20:38:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.714 20:38:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.714 20:38:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:14.714 20:38:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.714 20:38:39 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:14.714 20:38:39 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:14.714 20:38:39 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:14.714 20:38:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:14.714 20:38:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.714 20:38:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:14.714 20:38:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:14.714 20:38:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:14.714 20:38:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.714 20:38:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:14.714 20:38:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.714 20:38:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:14.714 20:38:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:14.714 20:38:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.714 20:38:39 -- common/autotest_common.sh@10 -- # set +x 00:07:22.879 20:38:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:22.879 20:38:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.879 20:38:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.879 20:38:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.879 20:38:46 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.879 20:38:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.879 20:38:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.879 20:38:46 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:22.879 20:38:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.879 20:38:46 -- nvmf/common.sh@296 -- # e810=() 00:07:22.879 20:38:46 -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.879 20:38:46 -- nvmf/common.sh@297 -- # x722=() 00:07:22.879 20:38:46 -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.879 20:38:46 -- nvmf/common.sh@298 -- # mlx=() 00:07:22.879 20:38:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.879 20:38:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.879 20:38:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.879 20:38:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.879 20:38:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.879 20:38:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.879 20:38:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:22.879 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:22.879 20:38:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.879 20:38:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:22.879 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:22.879 20:38:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.879 20:38:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.879 20:38:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.879 20:38:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:22.879 20:38:46 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.879 20:38:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:22.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:22.879 20:38:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.879 20:38:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.879 20:38:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.879 20:38:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:22.879 20:38:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.879 20:38:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:22.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:22.879 20:38:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.879 20:38:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:22.879 20:38:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:22.879 20:38:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:22.879 20:38:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:22.879 20:38:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.879 20:38:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.879 20:38:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.879 20:38:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.879 20:38:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.880 20:38:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.880 20:38:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.880 20:38:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.880 20:38:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.880 20:38:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.880 20:38:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.880 20:38:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.880 20:38:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.880 20:38:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.880 20:38:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.880 20:38:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.880 20:38:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.880 20:38:46 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.880 20:38:46 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.880 20:38:46 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:07:22.880 00:07:22.880 --- 10.0.0.2 ping statistics --- 00:07:22.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.880 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:07:22.880 20:38:46 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:22.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:07:22.880 00:07:22.880 --- 10.0.0.1 ping statistics --- 00:07:22.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.880 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:07:22.880 20:38:46 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.880 20:38:46 -- nvmf/common.sh@411 -- # return 0 00:07:22.880 20:38:46 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:22.880 20:38:46 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.880 20:38:46 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:22.880 20:38:46 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:22.880 20:38:46 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.880 20:38:46 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:22.880 20:38:46 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:22.880 20:38:46 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:22.880 20:38:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:22.880 20:38:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:22.880 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.880 ************************************ 00:07:22.880 START TEST nvmf_filesystem_no_in_capsule 00:07:22.880 ************************************ 00:07:22.880 20:38:46 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:07:22.880 20:38:46 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:22.880 20:38:46 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:22.880 20:38:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:22.880 20:38:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:22.880 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.880 20:38:46 -- nvmf/common.sh@470 -- # nvmfpid=2603857 00:07:22.880 20:38:46 -- nvmf/common.sh@471 -- # waitforlisten 2603857 00:07:22.880 20:38:46 -- common/autotest_common.sh@817 -- # '[' -z 2603857 ']' 00:07:22.880 20:38:46 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:22.880 20:38:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.880 20:38:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:22.880 20:38:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.880 20:38:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:22.880 20:38:46 -- common/autotest_common.sh@10 -- # set +x 00:07:22.880 [2024-04-24 20:38:46.801412] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:07:22.880 [2024-04-24 20:38:46.801490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.880 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.880 [2024-04-24 20:38:46.889807] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.880 [2024-04-24 20:38:46.984565] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
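Condensed, the nvmf_tcp_init bring-up traced above amounts to roughly the following (a loose sketch reconstructed from the xtrace lines; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the values this run picked, and the namespace keeps the target side on a separate network stack so the traffic really crosses the E810 link rather than loopback):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP port 4420 through
  ping -c 1 10.0.0.2                                                 # reachability check in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1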
00:07:22.880 [2024-04-24 20:38:46.984624] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:22.880 [2024-04-24 20:38:46.984633] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:22.880 [2024-04-24 20:38:46.984639] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:22.880 [2024-04-24 20:38:46.984645] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:22.880 [2024-04-24 20:38:46.984799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.880 [2024-04-24 20:38:46.984977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.880 [2024-04-24 20:38:46.985140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.880 [2024-04-24 20:38:46.985140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.141 20:38:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:23.141 20:38:47 -- common/autotest_common.sh@850 -- # return 0 00:07:23.141 20:38:47 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:23.141 20:38:47 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:23.141 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:07:23.141 20:38:47 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.141 20:38:47 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:23.141 20:38:47 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:23.141 20:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.141 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:07:23.141 [2024-04-24 20:38:47.720518] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.141 20:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.141 20:38:47 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:23.141 20:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.141 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:07:23.402 Malloc1 00:07:23.402 20:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.402 20:38:47 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:23.402 20:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.402 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:07:23.402 20:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.402 20:38:47 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:23.402 20:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.402 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:07:23.402 20:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.402 20:38:47 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:23.402 20:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.402 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:07:23.402 [2024-04-24 20:38:47.849032] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.402 20:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.402 20:38:47 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:07:23.402 20:38:47 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:23.402 20:38:47 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:23.402 20:38:47 -- common/autotest_common.sh@1366 -- # local bs 00:07:23.402 20:38:47 -- common/autotest_common.sh@1367 -- # local nb 00:07:23.402 20:38:47 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:23.402 20:38:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:23.402 20:38:47 -- common/autotest_common.sh@10 -- # set +x 00:07:23.402 20:38:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:23.402 20:38:47 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:23.402 { 00:07:23.402 "name": "Malloc1", 00:07:23.402 "aliases": [ 00:07:23.402 "90971a5a-63b5-4424-af63-f83e2f28638a" 00:07:23.402 ], 00:07:23.402 "product_name": "Malloc disk", 00:07:23.402 "block_size": 512, 00:07:23.402 "num_blocks": 1048576, 00:07:23.402 "uuid": "90971a5a-63b5-4424-af63-f83e2f28638a", 00:07:23.402 "assigned_rate_limits": { 00:07:23.402 "rw_ios_per_sec": 0, 00:07:23.402 "rw_mbytes_per_sec": 0, 00:07:23.402 "r_mbytes_per_sec": 0, 00:07:23.402 "w_mbytes_per_sec": 0 00:07:23.402 }, 00:07:23.402 "claimed": true, 00:07:23.402 "claim_type": "exclusive_write", 00:07:23.402 "zoned": false, 00:07:23.402 "supported_io_types": { 00:07:23.402 "read": true, 00:07:23.402 "write": true, 00:07:23.402 "unmap": true, 00:07:23.402 "write_zeroes": true, 00:07:23.402 "flush": true, 00:07:23.402 "reset": true, 00:07:23.402 "compare": false, 00:07:23.402 "compare_and_write": false, 00:07:23.402 "abort": true, 00:07:23.402 "nvme_admin": false, 00:07:23.402 "nvme_io": false 00:07:23.402 }, 00:07:23.402 "memory_domains": [ 00:07:23.402 { 00:07:23.402 "dma_device_id": "system", 00:07:23.402 "dma_device_type": 1 00:07:23.402 }, 00:07:23.402 { 00:07:23.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.402 "dma_device_type": 2 00:07:23.402 } 00:07:23.402 ], 00:07:23.402 "driver_specific": {} 00:07:23.402 } 00:07:23.402 ]' 00:07:23.402 20:38:47 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:23.402 20:38:47 -- common/autotest_common.sh@1369 -- # bs=512 00:07:23.402 20:38:47 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:23.402 20:38:47 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:23.402 20:38:47 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:23.402 20:38:47 -- common/autotest_common.sh@1374 -- # echo 512 00:07:23.402 20:38:47 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:23.402 20:38:47 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:25.317 20:38:49 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:25.317 20:38:49 -- common/autotest_common.sh@1184 -- # local i=0 00:07:25.317 20:38:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:25.317 20:38:49 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:25.317 20:38:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:27.233 20:38:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:27.233 20:38:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:27.233 20:38:51 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:27.233 20:38:51 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
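The target provisioning and host attach traced above reduce to the sketch below; rpc_cmd is the autotest wrapper around scripts/rpc.py talking to the nvmf_tgt started inside the namespace with '-i 0 -e 0xFFFF -m 0xF', and the NQNs, serial and host UUID are the values printed in this run. The retry loop at the end is a simplification of waitforserial:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0          # -c 0: no in-capsule data for this pass
  rpc_cmd bdev_malloc_create 512 512 -b Malloc1                 # 512 MB malloc bdev with 512-byte blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  while ! lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done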
00:07:27.233 20:38:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:27.233 20:38:51 -- common/autotest_common.sh@1194 -- # return 0 00:07:27.233 20:38:51 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:27.233 20:38:51 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:27.233 20:38:51 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:27.233 20:38:51 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:27.233 20:38:51 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:27.233 20:38:51 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:27.233 20:38:51 -- setup/common.sh@80 -- # echo 536870912 00:07:27.233 20:38:51 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:27.233 20:38:51 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:27.233 20:38:51 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:27.233 20:38:51 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:27.233 20:38:51 -- target/filesystem.sh@69 -- # partprobe 00:07:27.233 20:38:51 -- target/filesystem.sh@70 -- # sleep 1 00:07:28.174 20:38:52 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:28.174 20:38:52 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:28.174 20:38:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.174 20:38:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.174 20:38:52 -- common/autotest_common.sh@10 -- # set +x 00:07:28.435 ************************************ 00:07:28.435 START TEST filesystem_ext4 00:07:28.435 ************************************ 00:07:28.435 20:38:52 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:28.435 20:38:52 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:28.435 20:38:52 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:28.435 20:38:52 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:28.435 20:38:52 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:28.435 20:38:52 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:28.435 20:38:52 -- common/autotest_common.sh@914 -- # local i=0 00:07:28.435 20:38:52 -- common/autotest_common.sh@915 -- # local force 00:07:28.435 20:38:52 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:28.435 20:38:52 -- common/autotest_common.sh@918 -- # force=-F 00:07:28.435 20:38:52 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:28.435 mke2fs 1.46.5 (30-Dec-2021) 00:07:28.435 Discarding device blocks: 0/522240 done 00:07:28.435 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:28.435 Filesystem UUID: 97a1b524-d9a2-46bf-8550-a3c12e61b3bc 00:07:28.435 Superblock backups stored on blocks: 00:07:28.435 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:28.435 00:07:28.435 Allocating group tables: 0/64 done 00:07:28.435 Writing inode tables: 0/64 done 00:07:28.702 Creating journal (8192 blocks): done 00:07:28.702 Writing superblocks and filesystem accounting information: 0/64 done 00:07:28.702 00:07:28.702 20:38:53 -- common/autotest_common.sh@931 -- # return 0 00:07:28.702 20:38:53 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:28.963 20:38:53 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:28.963 20:38:53 -- target/filesystem.sh@25 -- # sync 00:07:28.963 20:38:53 -- target/filesystem.sh@26 -- # rm 
/mnt/device/aaa 00:07:28.963 20:38:53 -- target/filesystem.sh@27 -- # sync 00:07:28.963 20:38:53 -- target/filesystem.sh@29 -- # i=0 00:07:28.963 20:38:53 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:28.963 20:38:53 -- target/filesystem.sh@37 -- # kill -0 2603857 00:07:28.963 20:38:53 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:28.963 20:38:53 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:28.963 20:38:53 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:28.963 20:38:53 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:28.963 00:07:28.963 real 0m0.577s 00:07:28.963 user 0m0.025s 00:07:28.963 sys 0m0.071s 00:07:28.963 20:38:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.963 20:38:53 -- common/autotest_common.sh@10 -- # set +x 00:07:28.963 ************************************ 00:07:28.963 END TEST filesystem_ext4 00:07:28.963 ************************************ 00:07:28.963 20:38:53 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:28.963 20:38:53 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:28.963 20:38:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.963 20:38:53 -- common/autotest_common.sh@10 -- # set +x 00:07:29.224 ************************************ 00:07:29.224 START TEST filesystem_btrfs 00:07:29.224 ************************************ 00:07:29.224 20:38:53 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:29.224 20:38:53 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:29.224 20:38:53 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:29.224 20:38:53 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:29.224 20:38:53 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:29.224 20:38:53 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:29.224 20:38:53 -- common/autotest_common.sh@914 -- # local i=0 00:07:29.224 20:38:53 -- common/autotest_common.sh@915 -- # local force 00:07:29.224 20:38:53 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:29.224 20:38:53 -- common/autotest_common.sh@920 -- # force=-f 00:07:29.224 20:38:53 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:29.485 btrfs-progs v6.6.2 00:07:29.485 See https://btrfs.readthedocs.io for more information. 00:07:29.485 00:07:29.485 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
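Stepping back briefly while the mkfs.btrfs output continues below: each filesystem_* subtest in this log, the ext4 one that just finished as well as the btrfs and xfs runs that follow, exercises the exported namespace with the same cycle. Loosely, with 2603857 being this pass's nvmf_tgt pid:

  make_filesystem $fstype /dev/nvme0n1p1        # runs mkfs.ext4 -F, mkfs.btrfs -f or mkfs.xfs -f
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 2603857                               # the target process must still be alive afterwards
  lsblk -l -o NAME | grep -q -w nvme0n1         # device and partition must still be visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1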
00:07:29.485 NOTE: several default settings have changed in version 5.15, please make sure 00:07:29.485 this does not affect your deployments: 00:07:29.485 - DUP for metadata (-m dup) 00:07:29.485 - enabled no-holes (-O no-holes) 00:07:29.485 - enabled free-space-tree (-R free-space-tree) 00:07:29.485 00:07:29.485 Label: (null) 00:07:29.485 UUID: 89e89534-b1ed-45d2-9f53-4496358aef63 00:07:29.485 Node size: 16384 00:07:29.485 Sector size: 4096 00:07:29.485 Filesystem size: 510.00MiB 00:07:29.485 Block group profiles: 00:07:29.485 Data: single 8.00MiB 00:07:29.485 Metadata: DUP 32.00MiB 00:07:29.485 System: DUP 8.00MiB 00:07:29.485 SSD detected: yes 00:07:29.485 Zoned device: no 00:07:29.485 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:29.485 Runtime features: free-space-tree 00:07:29.485 Checksum: crc32c 00:07:29.485 Number of devices: 1 00:07:29.485 Devices: 00:07:29.485 ID SIZE PATH 00:07:29.485 1 510.00MiB /dev/nvme0n1p1 00:07:29.485 00:07:29.485 20:38:54 -- common/autotest_common.sh@931 -- # return 0 00:07:29.485 20:38:54 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.428 20:38:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.428 20:38:54 -- target/filesystem.sh@25 -- # sync 00:07:30.428 20:38:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.428 20:38:54 -- target/filesystem.sh@27 -- # sync 00:07:30.428 20:38:55 -- target/filesystem.sh@29 -- # i=0 00:07:30.428 20:38:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.428 20:38:55 -- target/filesystem.sh@37 -- # kill -0 2603857 00:07:30.428 20:38:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.428 20:38:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.428 20:38:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.428 20:38:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.428 00:07:30.428 real 0m1.355s 00:07:30.428 user 0m0.021s 00:07:30.428 sys 0m0.142s 00:07:30.428 20:38:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:30.428 20:38:55 -- common/autotest_common.sh@10 -- # set +x 00:07:30.428 ************************************ 00:07:30.428 END TEST filesystem_btrfs 00:07:30.428 ************************************ 00:07:30.689 20:38:55 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:30.689 20:38:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:30.689 20:38:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:30.689 20:38:55 -- common/autotest_common.sh@10 -- # set +x 00:07:30.689 ************************************ 00:07:30.689 START TEST filesystem_xfs 00:07:30.689 ************************************ 00:07:30.689 20:38:55 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:30.689 20:38:55 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:30.689 20:38:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.689 20:38:55 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:30.689 20:38:55 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:30.689 20:38:55 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:30.689 20:38:55 -- common/autotest_common.sh@914 -- # local i=0 00:07:30.689 20:38:55 -- common/autotest_common.sh@915 -- # local force 00:07:30.689 20:38:55 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:30.689 20:38:55 -- common/autotest_common.sh@920 -- # force=-f 00:07:30.689 20:38:55 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:30.689 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:30.689 = sectsz=512 attr=2, projid32bit=1 00:07:30.689 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:30.689 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:30.689 data = bsize=4096 blocks=130560, imaxpct=25 00:07:30.689 = sunit=0 swidth=0 blks 00:07:30.689 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:30.689 log =internal log bsize=4096 blocks=16384, version=2 00:07:30.689 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:30.689 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:31.633 Discarding blocks...Done. 00:07:31.633 20:38:56 -- common/autotest_common.sh@931 -- # return 0 00:07:31.633 20:38:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:34.178 20:38:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:34.178 20:38:58 -- target/filesystem.sh@25 -- # sync 00:07:34.178 20:38:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:34.178 20:38:58 -- target/filesystem.sh@27 -- # sync 00:07:34.178 20:38:58 -- target/filesystem.sh@29 -- # i=0 00:07:34.178 20:38:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:34.178 20:38:58 -- target/filesystem.sh@37 -- # kill -0 2603857 00:07:34.178 20:38:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:34.178 20:38:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:34.178 20:38:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:34.178 20:38:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:34.178 00:07:34.178 real 0m3.129s 00:07:34.178 user 0m0.034s 00:07:34.178 sys 0m0.071s 00:07:34.178 20:38:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.178 20:38:58 -- common/autotest_common.sh@10 -- # set +x 00:07:34.178 ************************************ 00:07:34.178 END TEST filesystem_xfs 00:07:34.178 ************************************ 00:07:34.178 20:38:58 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:34.178 20:38:58 -- target/filesystem.sh@93 -- # sync 00:07:34.178 20:38:58 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:34.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:34.178 20:38:58 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:34.178 20:38:58 -- common/autotest_common.sh@1205 -- # local i=0 00:07:34.178 20:38:58 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:34.178 20:38:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.178 20:38:58 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:34.178 20:38:58 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:34.178 20:38:58 -- common/autotest_common.sh@1217 -- # return 0 00:07:34.178 20:38:58 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:34.178 20:38:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:34.178 20:38:58 -- common/autotest_common.sh@10 -- # set +x 00:07:34.178 20:38:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:34.178 20:38:58 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:34.178 20:38:58 -- target/filesystem.sh@101 -- # killprocess 2603857 00:07:34.178 20:38:58 -- common/autotest_common.sh@936 -- # '[' -z 2603857 ']' 00:07:34.178 20:38:58 -- common/autotest_common.sh@940 -- # kill -0 2603857 00:07:34.178 20:38:58 -- 
common/autotest_common.sh@941 -- # uname 00:07:34.178 20:38:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:34.178 20:38:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2603857 00:07:34.178 20:38:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:34.178 20:38:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:34.178 20:38:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2603857' 00:07:34.178 killing process with pid 2603857 00:07:34.178 20:38:58 -- common/autotest_common.sh@955 -- # kill 2603857 00:07:34.178 20:38:58 -- common/autotest_common.sh@960 -- # wait 2603857 00:07:34.438 20:38:58 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:34.438 00:07:34.438 real 0m12.115s 00:07:34.438 user 0m47.706s 00:07:34.438 sys 0m1.436s 00:07:34.438 20:38:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:34.438 20:38:58 -- common/autotest_common.sh@10 -- # set +x 00:07:34.438 ************************************ 00:07:34.438 END TEST nvmf_filesystem_no_in_capsule 00:07:34.438 ************************************ 00:07:34.438 20:38:58 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:34.438 20:38:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:34.438 20:38:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.438 20:38:58 -- common/autotest_common.sh@10 -- # set +x 00:07:34.438 ************************************ 00:07:34.438 START TEST nvmf_filesystem_in_capsule 00:07:34.438 ************************************ 00:07:34.439 20:38:59 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:07:34.439 20:38:59 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:34.439 20:38:59 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:34.439 20:38:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:34.439 20:38:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:34.439 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:07:34.439 20:38:59 -- nvmf/common.sh@470 -- # nvmfpid=2606544 00:07:34.439 20:38:59 -- nvmf/common.sh@471 -- # waitforlisten 2606544 00:07:34.439 20:38:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:34.439 20:38:59 -- common/autotest_common.sh@817 -- # '[' -z 2606544 ']' 00:07:34.439 20:38:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.439 20:38:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:34.439 20:38:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.439 20:38:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:34.439 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:07:34.699 [2024-04-24 20:38:59.102535] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
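The nvmf_filesystem_in_capsule pass starting here repeats the same provisioning, connect and filesystem cycle against a fresh nvmf_tgt (pid 2606544); the only functional difference visible in the trace is the in-capsule data size handed to the TCP transport:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0       # first pass: no in-capsule data
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096    # this pass: 4096-byte in-capsule data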
00:07:34.699 [2024-04-24 20:38:59.102584] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.699 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.699 [2024-04-24 20:38:59.187759] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:34.699 [2024-04-24 20:38:59.251804] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.699 [2024-04-24 20:38:59.251844] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.699 [2024-04-24 20:38:59.251852] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.699 [2024-04-24 20:38:59.251860] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.699 [2024-04-24 20:38:59.251867] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:34.699 [2024-04-24 20:38:59.252027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.699 [2024-04-24 20:38:59.252149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.699 [2024-04-24 20:38:59.252309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.699 [2024-04-24 20:38:59.252310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.640 20:38:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:35.640 20:38:59 -- common/autotest_common.sh@850 -- # return 0 00:07:35.640 20:38:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:35.640 20:38:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:35.640 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.640 20:38:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.640 20:38:59 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:35.640 20:38:59 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:35.640 20:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.640 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.640 [2024-04-24 20:38:59.969432] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.640 20:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.640 20:38:59 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:35.640 20:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.640 20:38:59 -- common/autotest_common.sh@10 -- # set +x 00:07:35.640 Malloc1 00:07:35.640 20:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.640 20:39:00 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:35.640 20:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.640 20:39:00 -- common/autotest_common.sh@10 -- # set +x 00:07:35.640 20:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.640 20:39:00 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:35.640 20:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.640 20:39:00 -- common/autotest_common.sh@10 -- # set +x 00:07:35.640 20:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.640 20:39:00 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:35.640 20:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.640 20:39:00 -- common/autotest_common.sh@10 -- # set +x 00:07:35.640 [2024-04-24 20:39:00.091065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:35.640 20:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.640 20:39:00 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:35.640 20:39:00 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:07:35.640 20:39:00 -- common/autotest_common.sh@1365 -- # local bdev_info 00:07:35.640 20:39:00 -- common/autotest_common.sh@1366 -- # local bs 00:07:35.640 20:39:00 -- common/autotest_common.sh@1367 -- # local nb 00:07:35.640 20:39:00 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:35.640 20:39:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:35.640 20:39:00 -- common/autotest_common.sh@10 -- # set +x 00:07:35.640 20:39:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:35.640 20:39:00 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:07:35.640 { 00:07:35.640 "name": "Malloc1", 00:07:35.640 "aliases": [ 00:07:35.640 "50c9e2a7-6c5c-4dde-a1c4-d22e4306d4c5" 00:07:35.640 ], 00:07:35.640 "product_name": "Malloc disk", 00:07:35.640 "block_size": 512, 00:07:35.640 "num_blocks": 1048576, 00:07:35.640 "uuid": "50c9e2a7-6c5c-4dde-a1c4-d22e4306d4c5", 00:07:35.640 "assigned_rate_limits": { 00:07:35.640 "rw_ios_per_sec": 0, 00:07:35.641 "rw_mbytes_per_sec": 0, 00:07:35.641 "r_mbytes_per_sec": 0, 00:07:35.641 "w_mbytes_per_sec": 0 00:07:35.641 }, 00:07:35.641 "claimed": true, 00:07:35.641 "claim_type": "exclusive_write", 00:07:35.641 "zoned": false, 00:07:35.641 "supported_io_types": { 00:07:35.641 "read": true, 00:07:35.641 "write": true, 00:07:35.641 "unmap": true, 00:07:35.641 "write_zeroes": true, 00:07:35.641 "flush": true, 00:07:35.641 "reset": true, 00:07:35.641 "compare": false, 00:07:35.641 "compare_and_write": false, 00:07:35.641 "abort": true, 00:07:35.641 "nvme_admin": false, 00:07:35.641 "nvme_io": false 00:07:35.641 }, 00:07:35.641 "memory_domains": [ 00:07:35.641 { 00:07:35.641 "dma_device_id": "system", 00:07:35.641 "dma_device_type": 1 00:07:35.641 }, 00:07:35.641 { 00:07:35.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.641 "dma_device_type": 2 00:07:35.641 } 00:07:35.641 ], 00:07:35.641 "driver_specific": {} 00:07:35.641 } 00:07:35.641 ]' 00:07:35.641 20:39:00 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:07:35.641 20:39:00 -- common/autotest_common.sh@1369 -- # bs=512 00:07:35.641 20:39:00 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:07:35.641 20:39:00 -- common/autotest_common.sh@1370 -- # nb=1048576 00:07:35.641 20:39:00 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:07:35.641 20:39:00 -- common/autotest_common.sh@1374 -- # echo 512 00:07:35.641 20:39:00 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:35.641 20:39:00 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:37.061 20:39:01 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:37.061 20:39:01 -- common/autotest_common.sh@1184 -- # local i=0 00:07:37.061 20:39:01 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:07:37.061 20:39:01 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:07:37.061 20:39:01 -- common/autotest_common.sh@1191 -- # sleep 2 00:07:39.610 20:39:03 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:07:39.610 20:39:03 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:07:39.610 20:39:03 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:07:39.610 20:39:03 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:07:39.610 20:39:03 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:07:39.610 20:39:03 -- common/autotest_common.sh@1194 -- # return 0 00:07:39.610 20:39:03 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:39.610 20:39:03 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:39.610 20:39:03 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:39.610 20:39:03 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:39.610 20:39:03 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:39.610 20:39:03 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:39.610 20:39:03 -- setup/common.sh@80 -- # echo 536870912 00:07:39.610 20:39:03 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:39.610 20:39:03 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:39.610 20:39:03 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:39.610 20:39:03 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:39.610 20:39:04 -- target/filesystem.sh@69 -- # partprobe 00:07:39.610 20:39:04 -- target/filesystem.sh@70 -- # sleep 1 00:07:40.552 20:39:05 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:40.552 20:39:05 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:40.552 20:39:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:40.552 20:39:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.552 20:39:05 -- common/autotest_common.sh@10 -- # set +x 00:07:40.812 ************************************ 00:07:40.812 START TEST filesystem_in_capsule_ext4 00:07:40.812 ************************************ 00:07:40.812 20:39:05 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:40.812 20:39:05 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:40.812 20:39:05 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.812 20:39:05 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:40.813 20:39:05 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:07:40.813 20:39:05 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:40.813 20:39:05 -- common/autotest_common.sh@914 -- # local i=0 00:07:40.813 20:39:05 -- common/autotest_common.sh@915 -- # local force 00:07:40.813 20:39:05 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:07:40.813 20:39:05 -- common/autotest_common.sh@918 -- # force=-F 00:07:40.813 20:39:05 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:40.813 mke2fs 1.46.5 (30-Dec-2021) 00:07:40.813 Discarding device blocks: 0/522240 done 00:07:40.813 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:40.813 Filesystem UUID: 20be5673-a8f6-46c9-bf6c-9cd840deae6c 00:07:40.813 Superblock backups stored on blocks: 00:07:40.813 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:40.813 00:07:40.813 
Allocating group tables: 0/64 done 00:07:40.813 Writing inode tables: 0/64 done 00:07:41.754 Creating journal (8192 blocks): done 00:07:42.583 Writing superblocks and filesystem accounting information: 0/64 done 00:07:42.583 00:07:42.583 20:39:07 -- common/autotest_common.sh@931 -- # return 0 00:07:42.583 20:39:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.843 20:39:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.843 20:39:07 -- target/filesystem.sh@25 -- # sync 00:07:42.843 20:39:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.843 20:39:07 -- target/filesystem.sh@27 -- # sync 00:07:42.843 20:39:07 -- target/filesystem.sh@29 -- # i=0 00:07:42.843 20:39:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.843 20:39:07 -- target/filesystem.sh@37 -- # kill -0 2606544 00:07:42.843 20:39:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.843 20:39:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.843 20:39:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.843 20:39:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.843 00:07:42.843 real 0m1.995s 00:07:42.843 user 0m0.032s 00:07:42.843 sys 0m0.064s 00:07:42.843 20:39:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:42.843 20:39:07 -- common/autotest_common.sh@10 -- # set +x 00:07:42.843 ************************************ 00:07:42.843 END TEST filesystem_in_capsule_ext4 00:07:42.843 ************************************ 00:07:42.843 20:39:07 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:42.843 20:39:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:42.843 20:39:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.843 20:39:07 -- common/autotest_common.sh@10 -- # set +x 00:07:43.103 ************************************ 00:07:43.103 START TEST filesystem_in_capsule_btrfs 00:07:43.103 ************************************ 00:07:43.103 20:39:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:43.103 20:39:07 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:43.103 20:39:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.103 20:39:07 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:43.103 20:39:07 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:07:43.103 20:39:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:43.103 20:39:07 -- common/autotest_common.sh@914 -- # local i=0 00:07:43.103 20:39:07 -- common/autotest_common.sh@915 -- # local force 00:07:43.103 20:39:07 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:07:43.103 20:39:07 -- common/autotest_common.sh@920 -- # force=-f 00:07:43.103 20:39:07 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:43.362 btrfs-progs v6.6.2 00:07:43.362 See https://btrfs.readthedocs.io for more information. 00:07:43.362 00:07:43.362 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:43.362 NOTE: several default settings have changed in version 5.15, please make sure 00:07:43.362 this does not affect your deployments: 00:07:43.362 - DUP for metadata (-m dup) 00:07:43.362 - enabled no-holes (-O no-holes) 00:07:43.362 - enabled free-space-tree (-R free-space-tree) 00:07:43.362 00:07:43.362 Label: (null) 00:07:43.362 UUID: d8e17c9e-a4de-4489-b4d1-1a83e54e8112 00:07:43.362 Node size: 16384 00:07:43.362 Sector size: 4096 00:07:43.362 Filesystem size: 510.00MiB 00:07:43.362 Block group profiles: 00:07:43.362 Data: single 8.00MiB 00:07:43.362 Metadata: DUP 32.00MiB 00:07:43.362 System: DUP 8.00MiB 00:07:43.362 SSD detected: yes 00:07:43.362 Zoned device: no 00:07:43.362 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:43.362 Runtime features: free-space-tree 00:07:43.362 Checksum: crc32c 00:07:43.362 Number of devices: 1 00:07:43.362 Devices: 00:07:43.362 ID SIZE PATH 00:07:43.362 1 510.00MiB /dev/nvme0n1p1 00:07:43.362 00:07:43.362 20:39:07 -- common/autotest_common.sh@931 -- # return 0 00:07:43.362 20:39:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.622 20:39:08 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.622 20:39:08 -- target/filesystem.sh@25 -- # sync 00:07:43.881 20:39:08 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.881 20:39:08 -- target/filesystem.sh@27 -- # sync 00:07:43.881 20:39:08 -- target/filesystem.sh@29 -- # i=0 00:07:43.881 20:39:08 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.881 20:39:08 -- target/filesystem.sh@37 -- # kill -0 2606544 00:07:43.881 20:39:08 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.881 20:39:08 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.881 20:39:08 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.881 20:39:08 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.881 00:07:43.881 real 0m0.804s 00:07:43.881 user 0m0.030s 00:07:43.881 sys 0m0.132s 00:07:43.882 20:39:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.882 20:39:08 -- common/autotest_common.sh@10 -- # set +x 00:07:43.882 ************************************ 00:07:43.882 END TEST filesystem_in_capsule_btrfs 00:07:43.882 ************************************ 00:07:43.882 20:39:08 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:43.882 20:39:08 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:43.882 20:39:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.882 20:39:08 -- common/autotest_common.sh@10 -- # set +x 00:07:44.141 ************************************ 00:07:44.141 START TEST filesystem_in_capsule_xfs 00:07:44.141 ************************************ 00:07:44.141 20:39:08 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:07:44.141 20:39:08 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:44.141 20:39:08 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:44.141 20:39:08 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:44.141 20:39:08 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:07:44.141 20:39:08 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:07:44.141 20:39:08 -- common/autotest_common.sh@914 -- # local i=0 00:07:44.141 20:39:08 -- common/autotest_common.sh@915 -- # local force 00:07:44.141 20:39:08 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:07:44.141 20:39:08 -- common/autotest_common.sh@920 -- # force=-f 
00:07:44.141 20:39:08 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:44.141 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:44.141 = sectsz=512 attr=2, projid32bit=1 00:07:44.141 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:44.141 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:44.141 data = bsize=4096 blocks=130560, imaxpct=25 00:07:44.141 = sunit=0 swidth=0 blks 00:07:44.141 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:44.141 log =internal log bsize=4096 blocks=16384, version=2 00:07:44.141 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:44.141 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:45.083 Discarding blocks...Done. 00:07:45.083 20:39:09 -- common/autotest_common.sh@931 -- # return 0 00:07:45.083 20:39:09 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.995 20:39:11 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.995 20:39:11 -- target/filesystem.sh@25 -- # sync 00:07:46.995 20:39:11 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.995 20:39:11 -- target/filesystem.sh@27 -- # sync 00:07:46.995 20:39:11 -- target/filesystem.sh@29 -- # i=0 00:07:46.995 20:39:11 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.995 20:39:11 -- target/filesystem.sh@37 -- # kill -0 2606544 00:07:46.995 20:39:11 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.995 20:39:11 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.995 20:39:11 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.995 20:39:11 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.995 00:07:46.995 real 0m3.087s 00:07:46.995 user 0m0.031s 00:07:46.995 sys 0m0.073s 00:07:46.995 20:39:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.995 20:39:11 -- common/autotest_common.sh@10 -- # set +x 00:07:46.995 ************************************ 00:07:46.995 END TEST filesystem_in_capsule_xfs 00:07:46.995 ************************************ 00:07:47.255 20:39:11 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:47.529 20:39:11 -- target/filesystem.sh@93 -- # sync 00:07:47.529 20:39:11 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:47.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.529 20:39:12 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:47.529 20:39:12 -- common/autotest_common.sh@1205 -- # local i=0 00:07:47.529 20:39:12 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:07:47.529 20:39:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.529 20:39:12 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:07:47.529 20:39:12 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:47.529 20:39:12 -- common/autotest_common.sh@1217 -- # return 0 00:07:47.529 20:39:12 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.529 20:39:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.529 20:39:12 -- common/autotest_common.sh@10 -- # set +x 00:07:47.529 20:39:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.529 20:39:12 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:47.529 20:39:12 -- target/filesystem.sh@101 -- # killprocess 2606544 00:07:47.529 20:39:12 -- common/autotest_common.sh@936 -- # '[' -z 2606544 ']' 00:07:47.529 20:39:12 -- common/autotest_common.sh@940 -- # kill -0 2606544 
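The killprocess call in progress here, together with the per-pass teardown just above it and the nvmftestfini cleanup that follows, condenses to roughly the following (pid, NQN and interface names are this run's values; _remove_spdk_ns runs with its own tracing suppressed, so only the wrapper is visible in the log):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # drop the test partition under an exclusive lock
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2606544                                       # stop this pass's nvmf_tgt
  wait 2606544

  modprobe -v -r nvme-tcp                            # nvmftestfini: unload host-side modules
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns 14> /dev/null                      # namespace cleanup (its commands are suppressed in the trace)
  ip -4 addr flush cvl_0_1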
00:07:47.529 20:39:12 -- common/autotest_common.sh@941 -- # uname 00:07:47.529 20:39:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:47.529 20:39:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2606544 00:07:47.529 20:39:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:47.529 20:39:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:47.529 20:39:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2606544' 00:07:47.529 killing process with pid 2606544 00:07:47.529 20:39:12 -- common/autotest_common.sh@955 -- # kill 2606544 00:07:47.529 20:39:12 -- common/autotest_common.sh@960 -- # wait 2606544 00:07:47.789 20:39:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:47.789 00:07:47.789 real 0m13.350s 00:07:47.789 user 0m52.714s 00:07:47.789 sys 0m1.426s 00:07:47.789 20:39:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.789 20:39:12 -- common/autotest_common.sh@10 -- # set +x 00:07:47.789 ************************************ 00:07:47.789 END TEST nvmf_filesystem_in_capsule 00:07:47.789 ************************************ 00:07:48.050 20:39:12 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:48.050 20:39:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:48.050 20:39:12 -- nvmf/common.sh@117 -- # sync 00:07:48.050 20:39:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.050 20:39:12 -- nvmf/common.sh@120 -- # set +e 00:07:48.050 20:39:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.050 20:39:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.050 rmmod nvme_tcp 00:07:48.050 rmmod nvme_fabrics 00:07:48.050 rmmod nvme_keyring 00:07:48.050 20:39:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.050 20:39:12 -- nvmf/common.sh@124 -- # set -e 00:07:48.050 20:39:12 -- nvmf/common.sh@125 -- # return 0 00:07:48.050 20:39:12 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:07:48.050 20:39:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:48.050 20:39:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:48.050 20:39:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:48.050 20:39:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.050 20:39:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.050 20:39:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.050 20:39:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:48.050 20:39:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.962 20:39:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.962 00:07:49.962 real 0m35.775s 00:07:49.962 user 1m42.685s 00:07:49.962 sys 0m8.789s 00:07:49.962 20:39:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.962 20:39:14 -- common/autotest_common.sh@10 -- # set +x 00:07:49.962 ************************************ 00:07:49.962 END TEST nvmf_filesystem 00:07:49.962 ************************************ 00:07:50.223 20:39:14 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.223 20:39:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:50.223 20:39:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.223 20:39:14 -- common/autotest_common.sh@10 -- # set +x 00:07:50.223 ************************************ 00:07:50.223 START TEST nvmf_discovery 00:07:50.223 ************************************ 00:07:50.223 
20:39:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:50.223 * Looking for test storage... 00:07:50.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.223 20:39:14 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.223 20:39:14 -- nvmf/common.sh@7 -- # uname -s 00:07:50.223 20:39:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.223 20:39:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.223 20:39:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.223 20:39:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.223 20:39:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.223 20:39:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.223 20:39:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.223 20:39:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.223 20:39:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.223 20:39:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.223 20:39:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:50.223 20:39:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:07:50.223 20:39:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.223 20:39:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.223 20:39:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.223 20:39:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.223 20:39:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.223 20:39:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.223 20:39:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.223 20:39:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.223 20:39:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.223 20:39:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.223 20:39:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.223 20:39:14 -- paths/export.sh@5 -- # export PATH 00:07:50.223 20:39:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.223 20:39:14 -- nvmf/common.sh@47 -- # : 0 00:07:50.223 20:39:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.223 20:39:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.223 20:39:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.223 20:39:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.223 20:39:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.484 20:39:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.484 20:39:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.484 20:39:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.484 20:39:14 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:50.484 20:39:14 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:50.484 20:39:14 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:50.484 20:39:14 -- target/discovery.sh@15 -- # hash nvme 00:07:50.484 20:39:14 -- target/discovery.sh@20 -- # nvmftestinit 00:07:50.484 20:39:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:07:50.484 20:39:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.484 20:39:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:07:50.484 20:39:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:07:50.484 20:39:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:07:50.484 20:39:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.484 20:39:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.484 20:39:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.484 20:39:14 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:07:50.484 20:39:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:07:50.484 20:39:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.484 20:39:14 -- common/autotest_common.sh@10 -- # set +x 00:07:58.663 20:39:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:58.663 20:39:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:07:58.663 20:39:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:58.663 20:39:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:58.663 20:39:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:58.663 20:39:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:58.663 20:39:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:58.663 20:39:21 -- 
nvmf/common.sh@295 -- # net_devs=() 00:07:58.663 20:39:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:58.663 20:39:21 -- nvmf/common.sh@296 -- # e810=() 00:07:58.663 20:39:21 -- nvmf/common.sh@296 -- # local -ga e810 00:07:58.663 20:39:21 -- nvmf/common.sh@297 -- # x722=() 00:07:58.663 20:39:21 -- nvmf/common.sh@297 -- # local -ga x722 00:07:58.663 20:39:21 -- nvmf/common.sh@298 -- # mlx=() 00:07:58.663 20:39:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:07:58.663 20:39:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.663 20:39:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:58.663 20:39:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:58.663 20:39:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:58.663 20:39:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.663 20:39:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:58.663 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:58.663 20:39:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:58.663 20:39:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:58.663 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:58.663 20:39:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:58.663 20:39:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.663 20:39:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.663 20:39:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:58.663 20:39:21 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.663 20:39:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:58.663 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:58.663 20:39:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.663 20:39:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:58.663 20:39:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.663 20:39:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:07:58.663 20:39:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.663 20:39:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:58.663 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:58.663 20:39:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.663 20:39:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:07:58.663 20:39:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:07:58.663 20:39:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:07:58.663 20:39:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:07:58.663 20:39:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.663 20:39:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.663 20:39:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.663 20:39:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:58.663 20:39:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.663 20:39:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.663 20:39:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:58.663 20:39:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.663 20:39:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.663 20:39:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:58.663 20:39:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:58.663 20:39:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.663 20:39:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.663 20:39:21 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.663 20:39:21 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.663 20:39:21 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:58.663 20:39:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.663 20:39:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.663 20:39:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.663 20:39:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:58.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:07:58.663 00:07:58.663 --- 10.0.0.2 ping statistics --- 00:07:58.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.663 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:07:58.663 20:39:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:07:58.663 00:07:58.663 --- 10.0.0.1 ping statistics --- 00:07:58.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.663 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:07:58.663 20:39:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.663 20:39:22 -- nvmf/common.sh@411 -- # return 0 00:07:58.663 20:39:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:07:58.663 20:39:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.663 20:39:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:07:58.663 20:39:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:07:58.663 20:39:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.663 20:39:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:07:58.663 20:39:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:07:58.663 20:39:22 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:58.663 20:39:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:07:58.663 20:39:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:07:58.663 20:39:22 -- common/autotest_common.sh@10 -- # set +x 00:07:58.663 20:39:22 -- nvmf/common.sh@470 -- # nvmfpid=2614053 00:07:58.663 20:39:22 -- nvmf/common.sh@471 -- # waitforlisten 2614053 00:07:58.663 20:39:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.663 20:39:22 -- common/autotest_common.sh@817 -- # '[' -z 2614053 ']' 00:07:58.663 20:39:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.663 20:39:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:58.663 20:39:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.663 20:39:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:58.663 20:39:22 -- common/autotest_common.sh@10 -- # set +x 00:07:58.663 [2024-04-24 20:39:22.215577] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:07:58.663 [2024-04-24 20:39:22.215640] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.663 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.663 [2024-04-24 20:39:22.305782] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.663 [2024-04-24 20:39:22.401000] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.663 [2024-04-24 20:39:22.401063] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.663 [2024-04-24 20:39:22.401071] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:58.663 [2024-04-24 20:39:22.401077] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:58.663 [2024-04-24 20:39:22.401084] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
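The nvmftestinit block above builds the usual SPDK phy-mode test bed: one port of the dual-port E810 (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with port 4420 opened and reachability verified in both directions before nvmf_tgt is launched inside the namespace. A condensed sketch of the same steps, assuming the interface names reported in the log:

    #!/usr/bin/env bash
    # Condensed version of the namespace setup nvmftestinit logs above.
    # cvl_0_0 / cvl_0_1 are the net device names found under the two E810 ports.
    set -euo pipefail

    ns=cvl_0_0_ns_spdk
    tgt_if=cvl_0_0      # target side, moved into the namespace
    ini_if=cvl_0_1      # initiator side, stays in the root namespace

    ip -4 addr flush "$tgt_if"
    ip -4 addr flush "$ini_if"

    ip netns add "$ns"
    ip link set "$tgt_if" netns "$ns"

    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"

    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up

    # Accept NVMe/TCP traffic arriving on the initiator-side interface,
    # then confirm both directions answer, exactly as the log shows.
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1

    # The target is then started inside the namespace, e.g.:
    # ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF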
00:07:58.663 [2024-04-24 20:39:22.401233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.663 [2024-04-24 20:39:22.401379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.663 [2024-04-24 20:39:22.401550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.663 [2024-04-24 20:39:22.401550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.663 20:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:58.663 20:39:23 -- common/autotest_common.sh@850 -- # return 0 00:07:58.663 20:39:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:07:58.663 20:39:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:58.663 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.663 20:39:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.663 20:39:23 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:58.663 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.663 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.663 [2024-04-24 20:39:23.140573] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.663 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.663 20:39:23 -- target/discovery.sh@26 -- # seq 1 4 00:07:58.663 20:39:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:58.663 20:39:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 Null1 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 [2024-04-24 20:39:23.200895] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:58.664 20:39:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 Null2 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:58.664 20:39:23 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:58.664 20:39:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 Null3 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.664 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.664 20:39:23 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:58.664 20:39:23 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:58.664 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.664 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.925 Null4 00:07:58.925 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.925 20:39:23 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:58.925 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.925 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.925 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.925 20:39:23 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:58.925 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.925 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.925 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.925 20:39:23 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:58.925 
20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.925 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.925 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.925 20:39:23 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.925 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.925 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.925 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.925 20:39:23 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:58.925 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:58.925 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:58.925 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:58.925 20:39:23 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:07:59.185 00:07:59.185 Discovery Log Number of Records 6, Generation counter 6 00:07:59.185 =====Discovery Log Entry 0====== 00:07:59.185 trtype: tcp 00:07:59.185 adrfam: ipv4 00:07:59.185 subtype: current discovery subsystem 00:07:59.185 treq: not required 00:07:59.185 portid: 0 00:07:59.185 trsvcid: 4420 00:07:59.185 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:59.185 traddr: 10.0.0.2 00:07:59.185 eflags: explicit discovery connections, duplicate discovery information 00:07:59.185 sectype: none 00:07:59.185 =====Discovery Log Entry 1====== 00:07:59.185 trtype: tcp 00:07:59.185 adrfam: ipv4 00:07:59.185 subtype: nvme subsystem 00:07:59.185 treq: not required 00:07:59.185 portid: 0 00:07:59.185 trsvcid: 4420 00:07:59.185 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:59.185 traddr: 10.0.0.2 00:07:59.185 eflags: none 00:07:59.185 sectype: none 00:07:59.185 =====Discovery Log Entry 2====== 00:07:59.185 trtype: tcp 00:07:59.185 adrfam: ipv4 00:07:59.185 subtype: nvme subsystem 00:07:59.185 treq: not required 00:07:59.185 portid: 0 00:07:59.185 trsvcid: 4420 00:07:59.185 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:59.185 traddr: 10.0.0.2 00:07:59.185 eflags: none 00:07:59.185 sectype: none 00:07:59.185 =====Discovery Log Entry 3====== 00:07:59.185 trtype: tcp 00:07:59.185 adrfam: ipv4 00:07:59.185 subtype: nvme subsystem 00:07:59.185 treq: not required 00:07:59.185 portid: 0 00:07:59.185 trsvcid: 4420 00:07:59.185 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:59.185 traddr: 10.0.0.2 00:07:59.185 eflags: none 00:07:59.185 sectype: none 00:07:59.185 =====Discovery Log Entry 4====== 00:07:59.185 trtype: tcp 00:07:59.185 adrfam: ipv4 00:07:59.185 subtype: nvme subsystem 00:07:59.185 treq: not required 00:07:59.185 portid: 0 00:07:59.185 trsvcid: 4420 00:07:59.185 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:59.185 traddr: 10.0.0.2 00:07:59.185 eflags: none 00:07:59.185 sectype: none 00:07:59.185 =====Discovery Log Entry 5====== 00:07:59.185 trtype: tcp 00:07:59.185 adrfam: ipv4 00:07:59.185 subtype: discovery subsystem referral 00:07:59.185 treq: not required 00:07:59.185 portid: 0 00:07:59.185 trsvcid: 4430 00:07:59.185 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:59.185 traddr: 10.0.0.2 00:07:59.185 eflags: none 00:07:59.185 sectype: none 00:07:59.185 20:39:23 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:59.185 Perform nvmf subsystem discovery via RPC 00:07:59.185 20:39:23 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:59.185 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.185 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.185 [2024-04-24 20:39:23.585997] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:59.185 [ 00:07:59.185 { 00:07:59.185 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:59.185 "subtype": "Discovery", 00:07:59.185 "listen_addresses": [ 00:07:59.185 { 00:07:59.185 "transport": "TCP", 00:07:59.185 "trtype": "TCP", 00:07:59.185 "adrfam": "IPv4", 00:07:59.186 "traddr": "10.0.0.2", 00:07:59.186 "trsvcid": "4420" 00:07:59.186 } 00:07:59.186 ], 00:07:59.186 "allow_any_host": true, 00:07:59.186 "hosts": [] 00:07:59.186 }, 00:07:59.186 { 00:07:59.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:59.186 "subtype": "NVMe", 00:07:59.186 "listen_addresses": [ 00:07:59.186 { 00:07:59.186 "transport": "TCP", 00:07:59.186 "trtype": "TCP", 00:07:59.186 "adrfam": "IPv4", 00:07:59.186 "traddr": "10.0.0.2", 00:07:59.186 "trsvcid": "4420" 00:07:59.186 } 00:07:59.186 ], 00:07:59.186 "allow_any_host": true, 00:07:59.186 "hosts": [], 00:07:59.186 "serial_number": "SPDK00000000000001", 00:07:59.186 "model_number": "SPDK bdev Controller", 00:07:59.186 "max_namespaces": 32, 00:07:59.186 "min_cntlid": 1, 00:07:59.186 "max_cntlid": 65519, 00:07:59.186 "namespaces": [ 00:07:59.186 { 00:07:59.186 "nsid": 1, 00:07:59.186 "bdev_name": "Null1", 00:07:59.186 "name": "Null1", 00:07:59.186 "nguid": "9D6D1C75C64142279AB5E9EC6E074905", 00:07:59.186 "uuid": "9d6d1c75-c641-4227-9ab5-e9ec6e074905" 00:07:59.186 } 00:07:59.186 ] 00:07:59.186 }, 00:07:59.186 { 00:07:59.186 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:59.186 "subtype": "NVMe", 00:07:59.186 "listen_addresses": [ 00:07:59.186 { 00:07:59.186 "transport": "TCP", 00:07:59.186 "trtype": "TCP", 00:07:59.186 "adrfam": "IPv4", 00:07:59.186 "traddr": "10.0.0.2", 00:07:59.186 "trsvcid": "4420" 00:07:59.186 } 00:07:59.186 ], 00:07:59.186 "allow_any_host": true, 00:07:59.186 "hosts": [], 00:07:59.186 "serial_number": "SPDK00000000000002", 00:07:59.186 "model_number": "SPDK bdev Controller", 00:07:59.186 "max_namespaces": 32, 00:07:59.186 "min_cntlid": 1, 00:07:59.186 "max_cntlid": 65519, 00:07:59.186 "namespaces": [ 00:07:59.186 { 00:07:59.186 "nsid": 1, 00:07:59.186 "bdev_name": "Null2", 00:07:59.186 "name": "Null2", 00:07:59.186 "nguid": "5CA3B54DCB1D487391D15909BD0637A9", 00:07:59.186 "uuid": "5ca3b54d-cb1d-4873-91d1-5909bd0637a9" 00:07:59.186 } 00:07:59.186 ] 00:07:59.186 }, 00:07:59.186 { 00:07:59.186 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:59.186 "subtype": "NVMe", 00:07:59.186 "listen_addresses": [ 00:07:59.186 { 00:07:59.186 "transport": "TCP", 00:07:59.186 "trtype": "TCP", 00:07:59.186 "adrfam": "IPv4", 00:07:59.186 "traddr": "10.0.0.2", 00:07:59.186 "trsvcid": "4420" 00:07:59.186 } 00:07:59.186 ], 00:07:59.186 "allow_any_host": true, 00:07:59.186 "hosts": [], 00:07:59.186 "serial_number": "SPDK00000000000003", 00:07:59.186 "model_number": "SPDK bdev Controller", 00:07:59.186 "max_namespaces": 32, 00:07:59.186 "min_cntlid": 1, 00:07:59.186 "max_cntlid": 65519, 00:07:59.186 "namespaces": [ 00:07:59.186 { 00:07:59.186 "nsid": 1, 00:07:59.186 "bdev_name": "Null3", 00:07:59.186 "name": "Null3", 00:07:59.186 "nguid": "BF8065ADC0C64CE4949DF8E413A981CE", 00:07:59.186 "uuid": "bf8065ad-c0c6-4ce4-949d-f8e413a981ce" 00:07:59.186 } 00:07:59.186 ] 
00:07:59.186 }, 00:07:59.186 { 00:07:59.186 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:59.186 "subtype": "NVMe", 00:07:59.186 "listen_addresses": [ 00:07:59.186 { 00:07:59.186 "transport": "TCP", 00:07:59.186 "trtype": "TCP", 00:07:59.186 "adrfam": "IPv4", 00:07:59.186 "traddr": "10.0.0.2", 00:07:59.186 "trsvcid": "4420" 00:07:59.186 } 00:07:59.186 ], 00:07:59.186 "allow_any_host": true, 00:07:59.186 "hosts": [], 00:07:59.186 "serial_number": "SPDK00000000000004", 00:07:59.186 "model_number": "SPDK bdev Controller", 00:07:59.186 "max_namespaces": 32, 00:07:59.186 "min_cntlid": 1, 00:07:59.186 "max_cntlid": 65519, 00:07:59.186 "namespaces": [ 00:07:59.186 { 00:07:59.186 "nsid": 1, 00:07:59.186 "bdev_name": "Null4", 00:07:59.186 "name": "Null4", 00:07:59.186 "nguid": "8C17CEFAC50C480CACF89089B7A69504", 00:07:59.186 "uuid": "8c17cefa-c50c-480c-acf8-9089b7a69504" 00:07:59.186 } 00:07:59.186 ] 00:07:59.186 } 00:07:59.186 ] 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@42 -- # seq 1 4 00:07:59.186 20:39:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.186 20:39:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.186 20:39:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.186 20:39:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:59.186 20:39:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
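The discovery test whose output appears above creates four null-backed subsystems, listens on 10.0.0.2:4420 for each of them plus the discovery subsystem, adds a 4430 referral, and then checks that nvme discover and the nvmf_get_subsystems RPC both report the expected six records before tearing everything back down. The same target state can be reproduced by hand against a running nvmf_tgt; a sketch, assuming SPDK's scripts/rpc.py is reachable as rpc.py:

    #!/usr/bin/env bash
    # Hand-driven equivalent of the discovery.sh setup logged above.
    # Assumes a running nvmf_tgt and SPDK's scripts/rpc.py on PATH as rpc.py.
    set -euo pipefail

    rpc.py nvmf_create_transport -t tcp -o -u 8192

    for i in 1 2 3 4; do
        rpc.py bdev_null_create "Null$i" 102400 512
        rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

    # Discovery listener plus one referral, giving the six discovery log entries seen above.
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

    # Cross-check from the initiator side and via RPC.
    nvme discover -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_get_subsystems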
00:07:59.186 20:39:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:59.186 20:39:23 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:59.186 20:39:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:59.186 20:39:23 -- common/autotest_common.sh@10 -- # set +x 00:07:59.186 20:39:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:59.186 20:39:23 -- target/discovery.sh@49 -- # check_bdevs= 00:07:59.186 20:39:23 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:59.186 20:39:23 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:59.186 20:39:23 -- target/discovery.sh@57 -- # nvmftestfini 00:07:59.186 20:39:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:07:59.186 20:39:23 -- nvmf/common.sh@117 -- # sync 00:07:59.186 20:39:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:59.186 20:39:23 -- nvmf/common.sh@120 -- # set +e 00:07:59.186 20:39:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.186 20:39:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:59.186 rmmod nvme_tcp 00:07:59.186 rmmod nvme_fabrics 00:07:59.186 rmmod nvme_keyring 00:07:59.186 20:39:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.186 20:39:23 -- nvmf/common.sh@124 -- # set -e 00:07:59.186 20:39:23 -- nvmf/common.sh@125 -- # return 0 00:07:59.186 20:39:23 -- nvmf/common.sh@478 -- # '[' -n 2614053 ']' 00:07:59.186 20:39:23 -- nvmf/common.sh@479 -- # killprocess 2614053 00:07:59.186 20:39:23 -- common/autotest_common.sh@936 -- # '[' -z 2614053 ']' 00:07:59.186 20:39:23 -- common/autotest_common.sh@940 -- # kill -0 2614053 00:07:59.186 20:39:23 -- common/autotest_common.sh@941 -- # uname 00:07:59.448 20:39:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:59.448 20:39:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2614053 00:07:59.448 20:39:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:59.448 20:39:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:59.448 20:39:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2614053' 00:07:59.448 killing process with pid 2614053 00:07:59.448 20:39:23 -- common/autotest_common.sh@955 -- # kill 2614053 00:07:59.448 [2024-04-24 20:39:23.877835] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:59.448 20:39:23 -- common/autotest_common.sh@960 -- # wait 2614053 00:07:59.448 20:39:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:07:59.448 20:39:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:07:59.448 20:39:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:07:59.448 20:39:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.448 20:39:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.448 20:39:24 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.448 20:39:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.448 20:39:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.996 20:39:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:01.996 00:08:01.996 real 0m11.349s 00:08:01.996 user 0m8.748s 00:08:01.996 sys 0m5.814s 00:08:01.996 20:39:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:01.996 20:39:26 -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 ************************************ 00:08:01.996 END TEST nvmf_discovery 00:08:01.996 ************************************ 00:08:01.996 20:39:26 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:01.996 20:39:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:01.996 20:39:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.996 20:39:26 -- common/autotest_common.sh@10 -- # set +x 00:08:01.996 ************************************ 00:08:01.996 START TEST nvmf_referrals 00:08:01.996 ************************************ 00:08:01.996 20:39:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:01.996 * Looking for test storage... 00:08:01.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.996 20:39:26 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.996 20:39:26 -- nvmf/common.sh@7 -- # uname -s 00:08:01.996 20:39:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.996 20:39:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.996 20:39:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.996 20:39:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.996 20:39:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.996 20:39:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.996 20:39:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.996 20:39:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.996 20:39:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.996 20:39:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.996 20:39:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:01.996 20:39:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:01.996 20:39:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.996 20:39:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.996 20:39:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.996 20:39:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.996 20:39:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.996 20:39:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.996 20:39:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.997 20:39:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.997 20:39:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.997 20:39:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.997 20:39:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.997 20:39:26 -- paths/export.sh@5 -- # export PATH 00:08:01.997 20:39:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.997 20:39:26 -- nvmf/common.sh@47 -- # : 0 00:08:01.997 20:39:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.997 20:39:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.997 20:39:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.997 20:39:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.997 20:39:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.997 20:39:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.997 20:39:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.997 20:39:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.997 20:39:26 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:01.997 20:39:26 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:01.997 20:39:26 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:01.997 20:39:26 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:01.997 20:39:26 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:01.997 20:39:26 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:01.997 20:39:26 -- target/referrals.sh@37 -- # nvmftestinit 00:08:01.997 20:39:26 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:08:01.997 20:39:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.997 20:39:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:01.997 20:39:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:01.997 20:39:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:01.997 20:39:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.997 20:39:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.997 20:39:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.997 20:39:26 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:01.997 20:39:26 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:01.997 20:39:26 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.997 20:39:26 -- common/autotest_common.sh@10 -- # set +x 00:08:10.144 20:39:33 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:10.144 20:39:33 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.144 20:39:33 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.144 20:39:33 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.144 20:39:33 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.144 20:39:33 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.144 20:39:33 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.144 20:39:33 -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.144 20:39:33 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.144 20:39:33 -- nvmf/common.sh@296 -- # e810=() 00:08:10.144 20:39:33 -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.144 20:39:33 -- nvmf/common.sh@297 -- # x722=() 00:08:10.144 20:39:33 -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.144 20:39:33 -- nvmf/common.sh@298 -- # mlx=() 00:08:10.144 20:39:33 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.144 20:39:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.144 20:39:33 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.144 20:39:33 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.144 20:39:33 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.144 20:39:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.144 20:39:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:10.144 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:10.144 20:39:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.144 20:39:33 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.144 20:39:33 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:10.144 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:10.144 20:39:33 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.144 20:39:33 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.144 20:39:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.144 20:39:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.144 20:39:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.144 20:39:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:10.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:10.144 20:39:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.144 20:39:33 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.144 20:39:33 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.144 20:39:33 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:10.144 20:39:33 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.144 20:39:33 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:10.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:10.144 20:39:33 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.144 20:39:33 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:10.144 20:39:33 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:10.144 20:39:33 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:10.144 20:39:33 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:10.144 20:39:33 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.144 20:39:33 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.144 20:39:33 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.144 20:39:33 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.144 20:39:33 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.144 20:39:33 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.144 20:39:33 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.144 20:39:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.144 20:39:33 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.144 20:39:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.144 20:39:33 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.144 20:39:33 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.144 20:39:33 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:08:10.144 20:39:33 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.144 20:39:33 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.144 20:39:33 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.144 20:39:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.144 20:39:33 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.144 20:39:33 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.144 20:39:33 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:08:10.144 00:08:10.144 --- 10.0.0.2 ping statistics --- 00:08:10.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.144 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:08:10.144 20:39:33 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:10.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:08:10.144 00:08:10.144 --- 10.0.0.1 ping statistics --- 00:08:10.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.144 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:08:10.145 20:39:33 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.145 20:39:33 -- nvmf/common.sh@411 -- # return 0 00:08:10.145 20:39:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:10.145 20:39:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.145 20:39:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:10.145 20:39:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:10.145 20:39:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.145 20:39:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:10.145 20:39:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:10.145 20:39:33 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:10.145 20:39:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:10.145 20:39:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:10.145 20:39:33 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 20:39:33 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.145 20:39:33 -- nvmf/common.sh@470 -- # nvmfpid=2618672 00:08:10.145 20:39:33 -- nvmf/common.sh@471 -- # waitforlisten 2618672 00:08:10.145 20:39:33 -- common/autotest_common.sh@817 -- # '[' -z 2618672 ']' 00:08:10.145 20:39:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.145 20:39:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:10.145 20:39:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.145 20:39:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:10.145 20:39:33 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 [2024-04-24 20:39:33.745462] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:08:10.145 [2024-04-24 20:39:33.745513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.145 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.145 [2024-04-24 20:39:33.823843] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.145 [2024-04-24 20:39:33.914318] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.145 [2024-04-24 20:39:33.914379] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.145 [2024-04-24 20:39:33.914387] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.145 [2024-04-24 20:39:33.914394] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.145 [2024-04-24 20:39:33.914400] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.145 [2024-04-24 20:39:33.914533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.145 [2024-04-24 20:39:33.914678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.145 [2024-04-24 20:39:33.914711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:10.145 [2024-04-24 20:39:33.914733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.145 20:39:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:10.145 20:39:34 -- common/autotest_common.sh@850 -- # return 0 00:08:10.145 20:39:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:10.145 20:39:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:10.145 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 20:39:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.145 20:39:34 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:10.145 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.145 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 [2024-04-24 20:39:34.703706] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.145 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.145 20:39:34 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:10.145 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.145 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 [2024-04-24 20:39:34.719887] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:10.145 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.145 20:39:34 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:10.145 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.145 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.145 20:39:34 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:10.145 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.145 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 20:39:34 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:08:10.145 20:39:34 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:10.145 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.145 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.145 20:39:34 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.145 20:39:34 -- target/referrals.sh@48 -- # jq length 00:08:10.145 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.145 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.145 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.407 20:39:34 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:10.407 20:39:34 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:10.407 20:39:34 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:10.407 20:39:34 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:10.407 20:39:34 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.407 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.407 20:39:34 -- target/referrals.sh@21 -- # sort 00:08:10.407 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.407 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.407 20:39:34 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:10.407 20:39:34 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:10.407 20:39:34 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:10.407 20:39:34 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.407 20:39:34 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.407 20:39:34 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.407 20:39:34 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.407 20:39:34 -- target/referrals.sh@26 -- # sort 00:08:10.407 20:39:34 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:10.407 20:39:34 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:10.407 20:39:34 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:10.407 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.407 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.407 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.407 20:39:34 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:10.407 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.407 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.407 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.407 20:39:34 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:10.407 20:39:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.407 20:39:34 -- common/autotest_common.sh@10 -- # set +x 00:08:10.407 20:39:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.407 20:39:35 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:08:10.407 20:39:35 -- target/referrals.sh@56 -- # jq length 00:08:10.407 20:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.407 20:39:35 -- common/autotest_common.sh@10 -- # set +x 00:08:10.407 20:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.668 20:39:35 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:10.668 20:39:35 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:10.668 20:39:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.668 20:39:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.668 20:39:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.668 20:39:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.668 20:39:35 -- target/referrals.sh@26 -- # sort 00:08:10.668 20:39:35 -- target/referrals.sh@26 -- # echo 00:08:10.668 20:39:35 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:10.668 20:39:35 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:10.668 20:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.668 20:39:35 -- common/autotest_common.sh@10 -- # set +x 00:08:10.668 20:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.668 20:39:35 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:10.668 20:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.668 20:39:35 -- common/autotest_common.sh@10 -- # set +x 00:08:10.668 20:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.668 20:39:35 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:10.668 20:39:35 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:10.668 20:39:35 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:10.668 20:39:35 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:10.668 20:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:10.668 20:39:35 -- target/referrals.sh@21 -- # sort 00:08:10.668 20:39:35 -- common/autotest_common.sh@10 -- # set +x 00:08:10.668 20:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:10.668 20:39:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:10.668 20:39:35 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:10.668 20:39:35 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:10.668 20:39:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:10.668 20:39:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:10.668 20:39:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.668 20:39:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:10.668 20:39:35 -- target/referrals.sh@26 -- # sort 00:08:10.930 20:39:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:10.930 20:39:35 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:10.930 20:39:35 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme 
subsystem' 00:08:10.930 20:39:35 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:10.930 20:39:35 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:10.930 20:39:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.930 20:39:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:10.930 20:39:35 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:10.930 20:39:35 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:10.930 20:39:35 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:10.930 20:39:35 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:10.930 20:39:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:10.930 20:39:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:11.191 20:39:35 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:11.191 20:39:35 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:11.191 20:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.191 20:39:35 -- common/autotest_common.sh@10 -- # set +x 00:08:11.191 20:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.191 20:39:35 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:11.191 20:39:35 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:11.191 20:39:35 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.191 20:39:35 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:11.191 20:39:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.191 20:39:35 -- target/referrals.sh@21 -- # sort 00:08:11.191 20:39:35 -- common/autotest_common.sh@10 -- # set +x 00:08:11.191 20:39:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.191 20:39:35 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:11.191 20:39:35 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:11.191 20:39:35 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:11.191 20:39:35 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.191 20:39:35 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.191 20:39:35 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.191 20:39:35 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.191 20:39:35 -- target/referrals.sh@26 -- # sort 00:08:11.452 20:39:35 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:11.452 20:39:35 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:11.452 20:39:35 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:11.452 20:39:35 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:11.452 20:39:35 -- target/referrals.sh@31 -- # 
local 'subtype=nvme subsystem' 00:08:11.452 20:39:35 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.452 20:39:35 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:11.452 20:39:36 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:11.452 20:39:36 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:11.452 20:39:36 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:11.452 20:39:36 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:11.452 20:39:36 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.452 20:39:36 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:11.714 20:39:36 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:11.714 20:39:36 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:11.714 20:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.714 20:39:36 -- common/autotest_common.sh@10 -- # set +x 00:08:11.714 20:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.714 20:39:36 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:11.714 20:39:36 -- target/referrals.sh@82 -- # jq length 00:08:11.714 20:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:11.714 20:39:36 -- common/autotest_common.sh@10 -- # set +x 00:08:11.714 20:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:11.714 20:39:36 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:11.714 20:39:36 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:11.714 20:39:36 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:11.714 20:39:36 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:11.714 20:39:36 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:11.714 20:39:36 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:11.714 20:39:36 -- target/referrals.sh@26 -- # sort 00:08:11.976 20:39:36 -- target/referrals.sh@26 -- # echo 00:08:11.976 20:39:36 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:11.976 20:39:36 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:11.976 20:39:36 -- target/referrals.sh@86 -- # nvmftestfini 00:08:11.976 20:39:36 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:11.976 20:39:36 -- nvmf/common.sh@117 -- # sync 00:08:11.976 20:39:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.976 20:39:36 -- nvmf/common.sh@120 -- # set +e 00:08:11.976 20:39:36 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.976 20:39:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.976 rmmod nvme_tcp 00:08:11.976 rmmod nvme_fabrics 00:08:11.976 rmmod nvme_keyring 00:08:11.976 20:39:36 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.976 20:39:36 -- nvmf/common.sh@124 -- # set -e 
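The referral exercise that this teardown is unwinding reduces to one pattern, repeated for 127.0.0.2/3/4 and again with explicit subsystem NQNs: add a referral over the RPC socket, read it back both via RPC and via an NVMe discovery issued from the initiator side, then remove it and confirm the discovery log is empty. A minimal sketch of one round, using the same invocations that appear in the trace (rpc_cmd is the helper from the sourced test scripts that issues each method against the running target; NVME_HOSTNQN/NVME_HOSTID come from nvmf/common.sh; 127.0.0.2:4430 is the test's placeholder referral target):

  rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
  rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  rpc_cmd nvmf_discovery_get_referrals | jq length          # expected to drop back to 0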
00:08:11.976 20:39:36 -- nvmf/common.sh@125 -- # return 0 00:08:11.976 20:39:36 -- nvmf/common.sh@478 -- # '[' -n 2618672 ']' 00:08:11.976 20:39:36 -- nvmf/common.sh@479 -- # killprocess 2618672 00:08:11.976 20:39:36 -- common/autotest_common.sh@936 -- # '[' -z 2618672 ']' 00:08:11.976 20:39:36 -- common/autotest_common.sh@940 -- # kill -0 2618672 00:08:11.976 20:39:36 -- common/autotest_common.sh@941 -- # uname 00:08:11.976 20:39:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:11.976 20:39:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2618672 00:08:11.976 20:39:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:11.976 20:39:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:11.976 20:39:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2618672' 00:08:11.976 killing process with pid 2618672 00:08:11.976 20:39:36 -- common/autotest_common.sh@955 -- # kill 2618672 00:08:11.976 20:39:36 -- common/autotest_common.sh@960 -- # wait 2618672 00:08:11.976 20:39:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:11.976 20:39:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:12.237 20:39:36 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:12.237 20:39:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.237 20:39:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.237 20:39:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.237 20:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.237 20:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.155 20:39:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:14.155 00:08:14.155 real 0m12.398s 00:08:14.155 user 0m13.830s 00:08:14.155 sys 0m6.112s 00:08:14.155 20:39:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:14.155 20:39:38 -- common/autotest_common.sh@10 -- # set +x 00:08:14.155 ************************************ 00:08:14.155 END TEST nvmf_referrals 00:08:14.155 ************************************ 00:08:14.155 20:39:38 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.155 20:39:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:14.155 20:39:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.155 20:39:38 -- common/autotest_common.sh@10 -- # set +x 00:08:14.461 ************************************ 00:08:14.461 START TEST nvmf_connect_disconnect 00:08:14.461 ************************************ 00:08:14.461 20:39:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:14.461 * Looking for test storage... 
00:08:14.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.462 20:39:39 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.462 20:39:39 -- nvmf/common.sh@7 -- # uname -s 00:08:14.462 20:39:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.462 20:39:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.462 20:39:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.462 20:39:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.462 20:39:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.462 20:39:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.462 20:39:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.462 20:39:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.462 20:39:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.462 20:39:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.462 20:39:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:14.462 20:39:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:14.462 20:39:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.462 20:39:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.462 20:39:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.462 20:39:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.462 20:39:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.462 20:39:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.462 20:39:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.462 20:39:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.462 20:39:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.462 20:39:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.462 20:39:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.462 20:39:39 -- paths/export.sh@5 -- # export PATH 00:08:14.462 20:39:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.462 20:39:39 -- nvmf/common.sh@47 -- # : 0 00:08:14.462 20:39:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.462 20:39:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.462 20:39:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.462 20:39:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.462 20:39:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.462 20:39:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.462 20:39:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.462 20:39:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.462 20:39:39 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.462 20:39:39 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.462 20:39:39 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:14.462 20:39:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:14.462 20:39:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.462 20:39:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:14.462 20:39:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:14.462 20:39:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:14.462 20:39:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.462 20:39:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.462 20:39:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.462 20:39:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:14.462 20:39:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:14.462 20:39:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.462 20:39:39 -- common/autotest_common.sh@10 -- # set +x 00:08:21.061 20:39:45 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:21.061 20:39:45 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.061 20:39:45 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.061 20:39:45 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.061 20:39:45 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.061 20:39:45 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.061 20:39:45 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.061 20:39:45 -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.061 20:39:45 -- nvmf/common.sh@295 -- # local -ga net_devs 
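One detail from the common.sh block traced above that matters for every initiator-side command in these tests: the host identity is generated fresh each time the script is sourced. NVME_HOSTNQN comes from nvme gen-hostnqn, NVME_HOSTID is the UUID portion of that NQN (as seen in this run), and both are packed into the NVME_HOST array that later nvme discover/connect calls expand. A minimal sketch of that convention; the derivation of NVME_HOSTID is an assumption for illustration, not a quote from the script:

  NVME_HOSTNQN=$(nvme gen-hostnqn)               # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}                # assumption: hostid is the UUID suffix of the NQN
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json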
00:08:21.061 20:39:45 -- nvmf/common.sh@296 -- # e810=() 00:08:21.061 20:39:45 -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.061 20:39:45 -- nvmf/common.sh@297 -- # x722=() 00:08:21.061 20:39:45 -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.061 20:39:45 -- nvmf/common.sh@298 -- # mlx=() 00:08:21.061 20:39:45 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.061 20:39:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.061 20:39:45 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.061 20:39:45 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.061 20:39:45 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.061 20:39:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.061 20:39:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:21.061 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:21.061 20:39:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.061 20:39:45 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:21.061 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:21.061 20:39:45 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.061 20:39:45 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.061 20:39:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.061 20:39:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.061 20:39:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.061 20:39:45 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:08:21.061 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:21.061 20:39:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.061 20:39:45 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.061 20:39:45 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.061 20:39:45 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:21.061 20:39:45 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.061 20:39:45 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:21.061 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:21.061 20:39:45 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.061 20:39:45 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:21.061 20:39:45 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:21.061 20:39:45 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:21.061 20:39:45 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:21.061 20:39:45 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.061 20:39:45 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.061 20:39:45 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.061 20:39:45 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.061 20:39:45 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.061 20:39:45 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.061 20:39:45 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.061 20:39:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.061 20:39:45 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.061 20:39:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.061 20:39:45 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.061 20:39:45 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.061 20:39:45 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.324 20:39:45 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.324 20:39:45 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.324 20:39:45 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.324 20:39:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.324 20:39:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.324 20:39:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.324 20:39:45 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:08:21.324 00:08:21.324 --- 10.0.0.2 ping statistics --- 00:08:21.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.324 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:08:21.324 20:39:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:08:21.324 00:08:21.324 --- 10.0.0.1 ping statistics --- 00:08:21.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.324 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:21.324 20:39:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.324 20:39:45 -- nvmf/common.sh@411 -- # return 0 00:08:21.324 20:39:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:21.324 20:39:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.324 20:39:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:21.324 20:39:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:21.324 20:39:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.324 20:39:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:21.324 20:39:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:21.586 20:39:45 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:21.586 20:39:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:21.586 20:39:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:21.586 20:39:45 -- common/autotest_common.sh@10 -- # set +x 00:08:21.586 20:39:45 -- nvmf/common.sh@470 -- # nvmfpid=2623523 00:08:21.586 20:39:46 -- nvmf/common.sh@471 -- # waitforlisten 2623523 00:08:21.586 20:39:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:21.586 20:39:46 -- common/autotest_common.sh@817 -- # '[' -z 2623523 ']' 00:08:21.586 20:39:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.586 20:39:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:21.586 20:39:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.586 20:39:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:21.586 20:39:46 -- common/autotest_common.sh@10 -- # set +x 00:08:21.586 [2024-04-24 20:39:46.052268] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:08:21.586 [2024-04-24 20:39:46.052331] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.586 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.586 [2024-04-24 20:39:46.141138] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:21.848 [2024-04-24 20:39:46.234574] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.848 [2024-04-24 20:39:46.234641] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.848 [2024-04-24 20:39:46.234650] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.848 [2024-04-24 20:39:46.234656] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.848 [2024-04-24 20:39:46.234663] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
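The connect/disconnect test starting here provisions a single malloc-backed subsystem over the RPC socket and then, in a loop that runs under set +x, connects to it from the initiator namespace and disconnects again; only the "disconnected 1 controller(s)" confirmations from the disconnect side show up in the log below. The target-side provisioning, as traced further down in this run:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc_cmd bdev_malloc_create 64 512                                              # yields Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420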
00:08:21.848 [2024-04-24 20:39:46.234769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:21.848 [2024-04-24 20:39:46.234844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:21.848 [2024-04-24 20:39:46.234993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.848 [2024-04-24 20:39:46.234994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:22.419 20:39:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:22.419 20:39:46 -- common/autotest_common.sh@850 -- # return 0 00:08:22.419 20:39:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:22.419 20:39:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:22.419 20:39:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.419 20:39:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.419 20:39:46 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:22.419 20:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.419 20:39:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.419 [2024-04-24 20:39:46.976589] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.419 20:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.419 20:39:46 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:22.419 20:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.419 20:39:46 -- common/autotest_common.sh@10 -- # set +x 00:08:22.419 20:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.419 20:39:47 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:22.419 20:39:47 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:22.419 20:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.419 20:39:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.419 20:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.419 20:39:47 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:22.419 20:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.419 20:39:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.419 20:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.419 20:39:47 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.419 20:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:22.419 20:39:47 -- common/autotest_common.sh@10 -- # set +x 00:08:22.419 [2024-04-24 20:39:47.035998] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.419 20:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:22.419 20:39:47 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:22.419 20:39:47 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:22.419 20:39:47 -- target/connect_disconnect.sh@34 -- # set +x 00:08:26.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.429 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.755 20:40:05 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:40.755 20:40:05 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:40.755 20:40:05 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:40.755 20:40:05 -- nvmf/common.sh@117 -- # sync 00:08:40.755 20:40:05 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.755 20:40:05 -- nvmf/common.sh@120 -- # set +e 00:08:40.755 20:40:05 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.755 20:40:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.755 rmmod nvme_tcp 00:08:40.755 rmmod nvme_fabrics 00:08:40.755 rmmod nvme_keyring 00:08:40.755 20:40:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.755 20:40:05 -- nvmf/common.sh@124 -- # set -e 00:08:40.755 20:40:05 -- nvmf/common.sh@125 -- # return 0 00:08:40.755 20:40:05 -- nvmf/common.sh@478 -- # '[' -n 2623523 ']' 00:08:40.755 20:40:05 -- nvmf/common.sh@479 -- # killprocess 2623523 00:08:40.755 20:40:05 -- common/autotest_common.sh@936 -- # '[' -z 2623523 ']' 00:08:40.755 20:40:05 -- common/autotest_common.sh@940 -- # kill -0 2623523 00:08:40.755 20:40:05 -- common/autotest_common.sh@941 -- # uname 00:08:40.755 20:40:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:40.755 20:40:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2623523 00:08:40.755 20:40:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:40.755 20:40:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:40.755 20:40:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2623523' 00:08:40.755 killing process with pid 2623523 00:08:40.755 20:40:05 -- common/autotest_common.sh@955 -- # kill 2623523 00:08:40.755 20:40:05 -- common/autotest_common.sh@960 -- # wait 2623523 00:08:41.016 20:40:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:41.016 20:40:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:41.016 20:40:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:41.016 20:40:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.016 20:40:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.016 20:40:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.016 20:40:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.016 20:40:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.931 20:40:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.192 00:08:43.192 real 0m28.676s 00:08:43.192 user 1m18.572s 00:08:43.192 sys 0m6.675s 00:08:43.192 20:40:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:43.192 20:40:07 -- common/autotest_common.sh@10 -- # set +x 00:08:43.192 ************************************ 00:08:43.192 END TEST nvmf_connect_disconnect 00:08:43.192 ************************************ 00:08:43.192 20:40:07 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:08:43.192 20:40:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:43.192 20:40:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.192 20:40:07 -- common/autotest_common.sh@10 -- # set +x 00:08:43.192 ************************************ 00:08:43.192 START TEST nvmf_multitarget 00:08:43.192 ************************************ 00:08:43.192 20:40:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh 
--transport=tcp 00:08:43.454 * Looking for test storage... 00:08:43.454 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.454 20:40:07 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.454 20:40:07 -- nvmf/common.sh@7 -- # uname -s 00:08:43.454 20:40:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.454 20:40:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.454 20:40:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.454 20:40:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.454 20:40:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.454 20:40:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.454 20:40:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.454 20:40:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.454 20:40:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.454 20:40:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.454 20:40:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:43.454 20:40:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:43.454 20:40:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.454 20:40:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.454 20:40:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.454 20:40:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.454 20:40:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.454 20:40:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.454 20:40:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.454 20:40:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.454 20:40:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.454 20:40:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.454 20:40:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.454 20:40:07 -- paths/export.sh@5 -- # export PATH 00:08:43.454 20:40:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.454 20:40:07 -- nvmf/common.sh@47 -- # : 0 00:08:43.454 20:40:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.454 20:40:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.454 20:40:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.454 20:40:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.454 20:40:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.454 20:40:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.454 20:40:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.454 20:40:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.454 20:40:07 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:43.454 20:40:07 -- target/multitarget.sh@15 -- # nvmftestinit 00:08:43.454 20:40:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:43.454 20:40:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.454 20:40:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:43.454 20:40:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:43.454 20:40:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:43.454 20:40:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.454 20:40:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:43.454 20:40:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.454 20:40:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:43.454 20:40:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:43.454 20:40:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.454 20:40:07 -- common/autotest_common.sh@10 -- # set +x 00:08:51.602 20:40:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:51.602 20:40:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:08:51.602 20:40:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:51.602 20:40:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:51.602 20:40:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:51.602 20:40:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:51.602 20:40:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:51.602 20:40:14 -- nvmf/common.sh@295 -- # net_devs=() 00:08:51.602 20:40:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:51.602 20:40:14 -- 
nvmf/common.sh@296 -- # e810=() 00:08:51.602 20:40:14 -- nvmf/common.sh@296 -- # local -ga e810 00:08:51.602 20:40:14 -- nvmf/common.sh@297 -- # x722=() 00:08:51.602 20:40:14 -- nvmf/common.sh@297 -- # local -ga x722 00:08:51.602 20:40:14 -- nvmf/common.sh@298 -- # mlx=() 00:08:51.602 20:40:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:08:51.602 20:40:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.602 20:40:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:51.602 20:40:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:51.602 20:40:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:51.602 20:40:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.602 20:40:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:51.602 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:51.602 20:40:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:51.602 20:40:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:51.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:51.602 20:40:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:51.602 20:40:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:51.602 20:40:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.602 20:40:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.602 20:40:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.602 20:40:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.602 20:40:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:08:51.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:51.602 20:40:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.602 20:40:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:51.602 20:40:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.602 20:40:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:08:51.602 20:40:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.602 20:40:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:51.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:51.602 20:40:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.602 20:40:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:08:51.603 20:40:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:08:51.603 20:40:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:08:51.603 20:40:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:08:51.603 20:40:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:08:51.603 20:40:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.603 20:40:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.603 20:40:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.603 20:40:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:51.603 20:40:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.603 20:40:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.603 20:40:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:51.603 20:40:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.603 20:40:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.603 20:40:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:51.603 20:40:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:51.603 20:40:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.603 20:40:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.603 20:40:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.603 20:40:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.603 20:40:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:51.603 20:40:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.603 20:40:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.603 20:40:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.603 20:40:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:51.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:08:51.603 00:08:51.603 --- 10.0.0.2 ping statistics --- 00:08:51.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.603 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:08:51.603 20:40:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:08:51.603 00:08:51.603 --- 10.0.0.1 ping statistics --- 00:08:51.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.603 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:51.603 20:40:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.603 20:40:15 -- nvmf/common.sh@411 -- # return 0 00:08:51.603 20:40:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:51.603 20:40:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.603 20:40:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:51.603 20:40:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:51.603 20:40:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.603 20:40:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:51.603 20:40:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:51.603 20:40:15 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:51.603 20:40:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:51.603 20:40:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.603 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:08:51.603 20:40:15 -- nvmf/common.sh@470 -- # nvmfpid=2631438 00:08:51.603 20:40:15 -- nvmf/common.sh@471 -- # waitforlisten 2631438 00:08:51.603 20:40:15 -- common/autotest_common.sh@817 -- # '[' -z 2631438 ']' 00:08:51.603 20:40:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.603 20:40:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.603 20:40:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.603 20:40:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.603 20:40:15 -- common/autotest_common.sh@10 -- # set +x 00:08:51.603 20:40:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.603 [2024-04-24 20:40:15.152543] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:08:51.603 [2024-04-24 20:40:15.152592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.603 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.603 [2024-04-24 20:40:15.229487] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.603 [2024-04-24 20:40:15.325622] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.603 [2024-04-24 20:40:15.325682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.603 [2024-04-24 20:40:15.325691] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.603 [2024-04-24 20:40:15.325697] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.603 [2024-04-24 20:40:15.325703] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:51.603 [2024-04-24 20:40:15.325782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.603 [2024-04-24 20:40:15.325929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.603 [2024-04-24 20:40:15.326100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.603 [2024-04-24 20:40:15.326100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.603 20:40:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:51.603 20:40:16 -- common/autotest_common.sh@850 -- # return 0 00:08:51.603 20:40:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:51.603 20:40:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:51.603 20:40:16 -- common/autotest_common.sh@10 -- # set +x 00:08:51.603 20:40:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.603 20:40:16 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:51.603 20:40:16 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:51.603 20:40:16 -- target/multitarget.sh@21 -- # jq length 00:08:51.603 20:40:16 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:51.603 20:40:16 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:51.864 "nvmf_tgt_1" 00:08:51.864 20:40:16 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:51.864 "nvmf_tgt_2" 00:08:51.864 20:40:16 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:51.864 20:40:16 -- target/multitarget.sh@28 -- # jq length 00:08:52.124 20:40:16 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:52.124 20:40:16 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:52.124 true 00:08:52.124 20:40:16 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:52.385 true 00:08:52.385 20:40:16 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:52.385 20:40:16 -- target/multitarget.sh@35 -- # jq length 00:08:52.385 20:40:16 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:52.385 20:40:16 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:52.385 20:40:16 -- target/multitarget.sh@41 -- # nvmftestfini 00:08:52.385 20:40:16 -- nvmf/common.sh@477 -- # nvmfcleanup 00:08:52.385 20:40:16 -- nvmf/common.sh@117 -- # sync 00:08:52.385 20:40:16 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:52.385 20:40:16 -- nvmf/common.sh@120 -- # set +e 00:08:52.385 20:40:16 -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.385 20:40:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:52.385 rmmod nvme_tcp 00:08:52.385 rmmod nvme_fabrics 00:08:52.385 rmmod nvme_keyring 00:08:52.385 20:40:16 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.385 20:40:16 -- nvmf/common.sh@124 -- # set -e 00:08:52.385 20:40:16 -- nvmf/common.sh@125 -- # return 0 
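[Editor's note] The multitarget test that has just completed drives the target through test/nvmf/target/multitarget_rpc.py. A minimal sketch of the same sequence, using only the RPC calls and jq checks that appear in the trace (paths abbreviated; the bracket tests restate the '[' N '!=' N ']' assertions above):

    rpc=./test/nvmf/target/multitarget_rpc.py

    # Only the default target should exist at the start of the test.
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]

    # Create two extra targets, each with 32 subsystem slots ...
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]

    # ... then remove them again, leaving just the default target behind.
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]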
00:08:52.385 20:40:16 -- nvmf/common.sh@478 -- # '[' -n 2631438 ']' 00:08:52.385 20:40:16 -- nvmf/common.sh@479 -- # killprocess 2631438 00:08:52.385 20:40:16 -- common/autotest_common.sh@936 -- # '[' -z 2631438 ']' 00:08:52.385 20:40:16 -- common/autotest_common.sh@940 -- # kill -0 2631438 00:08:52.385 20:40:16 -- common/autotest_common.sh@941 -- # uname 00:08:52.385 20:40:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:52.385 20:40:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2631438 00:08:52.645 20:40:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:52.645 20:40:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:52.646 20:40:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2631438' 00:08:52.646 killing process with pid 2631438 00:08:52.646 20:40:17 -- common/autotest_common.sh@955 -- # kill 2631438 00:08:52.646 20:40:17 -- common/autotest_common.sh@960 -- # wait 2631438 00:08:52.646 20:40:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:08:52.646 20:40:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:08:52.646 20:40:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:08:52.646 20:40:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:52.646 20:40:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:52.646 20:40:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.646 20:40:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.646 20:40:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.191 20:40:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:55.191 00:08:55.191 real 0m11.479s 00:08:55.191 user 0m10.302s 00:08:55.191 sys 0m5.857s 00:08:55.191 20:40:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:55.191 20:40:19 -- common/autotest_common.sh@10 -- # set +x 00:08:55.191 ************************************ 00:08:55.191 END TEST nvmf_multitarget 00:08:55.191 ************************************ 00:08:55.191 20:40:19 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.191 20:40:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:55.191 20:40:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:55.191 20:40:19 -- common/autotest_common.sh@10 -- # set +x 00:08:55.191 ************************************ 00:08:55.191 START TEST nvmf_rpc 00:08:55.191 ************************************ 00:08:55.191 20:40:19 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:08:55.191 * Looking for test storage... 
00:08:55.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.191 20:40:19 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.191 20:40:19 -- nvmf/common.sh@7 -- # uname -s 00:08:55.191 20:40:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.191 20:40:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.191 20:40:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.191 20:40:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.191 20:40:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.191 20:40:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.191 20:40:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.191 20:40:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.191 20:40:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.191 20:40:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.191 20:40:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:55.191 20:40:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:08:55.191 20:40:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.191 20:40:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.191 20:40:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.191 20:40:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:55.191 20:40:19 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.191 20:40:19 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.191 20:40:19 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.191 20:40:19 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.191 20:40:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.191 20:40:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.191 20:40:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.191 20:40:19 -- paths/export.sh@5 -- # export PATH 00:08:55.191 20:40:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.191 20:40:19 -- nvmf/common.sh@47 -- # : 0 00:08:55.191 20:40:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:55.191 20:40:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:55.191 20:40:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:55.191 20:40:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.191 20:40:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.191 20:40:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:55.191 20:40:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:55.191 20:40:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:55.191 20:40:19 -- target/rpc.sh@11 -- # loops=5 00:08:55.191 20:40:19 -- target/rpc.sh@23 -- # nvmftestinit 00:08:55.191 20:40:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:55.191 20:40:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.191 20:40:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:55.191 20:40:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:55.191 20:40:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:55.191 20:40:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.191 20:40:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.191 20:40:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.191 20:40:19 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:08:55.191 20:40:19 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:08:55.191 20:40:19 -- nvmf/common.sh@285 -- # xtrace_disable 00:08:55.191 20:40:19 -- common/autotest_common.sh@10 -- # set +x 00:09:03.360 20:40:26 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:03.360 20:40:26 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:03.360 20:40:26 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:03.360 20:40:26 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:03.360 20:40:26 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:03.360 20:40:26 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:03.360 20:40:26 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:03.360 20:40:26 -- nvmf/common.sh@295 -- # net_devs=() 00:09:03.360 20:40:26 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:03.360 20:40:26 -- nvmf/common.sh@296 -- # e810=() 00:09:03.360 20:40:26 -- nvmf/common.sh@296 -- # local -ga e810 00:09:03.361 
20:40:26 -- nvmf/common.sh@297 -- # x722=() 00:09:03.361 20:40:26 -- nvmf/common.sh@297 -- # local -ga x722 00:09:03.361 20:40:26 -- nvmf/common.sh@298 -- # mlx=() 00:09:03.361 20:40:26 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:03.361 20:40:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:03.361 20:40:26 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:03.361 20:40:26 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:03.361 20:40:26 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:03.361 20:40:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.361 20:40:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:03.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:03.361 20:40:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:03.361 20:40:26 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:03.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:03.361 20:40:26 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:03.361 20:40:26 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.361 20:40:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.361 20:40:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:03.361 20:40:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.361 20:40:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:03.361 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:03.361 20:40:26 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:03.361 20:40:26 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:03.361 20:40:26 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:03.361 20:40:26 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:03.361 20:40:26 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:03.361 20:40:26 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:03.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:03.361 20:40:26 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:03.361 20:40:26 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:03.361 20:40:26 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:03.361 20:40:26 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:03.361 20:40:26 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:03.361 20:40:26 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:03.361 20:40:26 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:03.361 20:40:26 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:03.361 20:40:26 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:03.361 20:40:26 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:03.361 20:40:26 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:03.361 20:40:26 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:03.361 20:40:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:03.361 20:40:26 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:03.361 20:40:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:03.361 20:40:26 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:03.361 20:40:26 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:03.361 20:40:26 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:03.361 20:40:26 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:03.361 20:40:26 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:03.361 20:40:26 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:03.361 20:40:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:03.361 20:40:26 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:03.362 20:40:26 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:03.362 20:40:26 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:03.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:03.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:09:03.362 00:09:03.362 --- 10.0.0.2 ping statistics --- 00:09:03.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.362 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:09:03.362 20:40:26 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:03.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:03.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:09:03.362 00:09:03.362 --- 10.0.0.1 ping statistics --- 00:09:03.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.362 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:03.362 20:40:26 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.362 20:40:26 -- nvmf/common.sh@411 -- # return 0 00:09:03.362 20:40:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:03.362 20:40:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.362 20:40:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:03.362 20:40:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:03.362 20:40:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.362 20:40:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:03.362 20:40:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:03.362 20:40:26 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:03.362 20:40:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:03.362 20:40:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:03.362 20:40:26 -- common/autotest_common.sh@10 -- # set +x 00:09:03.362 20:40:26 -- nvmf/common.sh@470 -- # nvmfpid=2636047 00:09:03.362 20:40:26 -- nvmf/common.sh@471 -- # waitforlisten 2636047 00:09:03.362 20:40:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:03.362 20:40:26 -- common/autotest_common.sh@817 -- # '[' -z 2636047 ']' 00:09:03.362 20:40:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.362 20:40:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:03.362 20:40:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.362 20:40:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:03.362 20:40:26 -- common/autotest_common.sh@10 -- # set +x 00:09:03.362 [2024-04-24 20:40:26.951798] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:09:03.362 [2024-04-24 20:40:26.951864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.362 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.362 [2024-04-24 20:40:27.039799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.362 [2024-04-24 20:40:27.133785] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.362 [2024-04-24 20:40:27.133846] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.362 [2024-04-24 20:40:27.133854] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.362 [2024-04-24 20:40:27.133861] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.362 [2024-04-24 20:40:27.133867] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
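[Editor's note] The device-discovery pass logged a few lines back (gather_supported_nvmf_pci_devs) maps each whitelisted PCI function to its kernel net devices by globbing sysfs. Stripped of the surrounding bookkeeping, and using the two e810 ports found in this run as examples, the idea is roughly:

    # The names under /sys/bus/pci/devices/<bdf>/net/ are the net devices
    # bound to that PCI function (cvl_0_0 and cvl_0_1 in this run).
    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")      # keep just the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done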
00:09:03.362 [2024-04-24 20:40:27.134001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.362 [2024-04-24 20:40:27.134145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.362 [2024-04-24 20:40:27.134316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.362 [2024-04-24 20:40:27.134317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.362 20:40:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:03.362 20:40:27 -- common/autotest_common.sh@850 -- # return 0 00:09:03.362 20:40:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:03.362 20:40:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:03.362 20:40:27 -- common/autotest_common.sh@10 -- # set +x 00:09:03.362 20:40:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.362 20:40:27 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:03.362 20:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.362 20:40:27 -- common/autotest_common.sh@10 -- # set +x 00:09:03.362 20:40:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.362 20:40:27 -- target/rpc.sh@26 -- # stats='{ 00:09:03.362 "tick_rate": 2400000000, 00:09:03.362 "poll_groups": [ 00:09:03.362 { 00:09:03.362 "name": "nvmf_tgt_poll_group_0", 00:09:03.362 "admin_qpairs": 0, 00:09:03.362 "io_qpairs": 0, 00:09:03.362 "current_admin_qpairs": 0, 00:09:03.362 "current_io_qpairs": 0, 00:09:03.362 "pending_bdev_io": 0, 00:09:03.362 "completed_nvme_io": 0, 00:09:03.362 "transports": [] 00:09:03.362 }, 00:09:03.362 { 00:09:03.362 "name": "nvmf_tgt_poll_group_1", 00:09:03.362 "admin_qpairs": 0, 00:09:03.362 "io_qpairs": 0, 00:09:03.362 "current_admin_qpairs": 0, 00:09:03.362 "current_io_qpairs": 0, 00:09:03.362 "pending_bdev_io": 0, 00:09:03.362 "completed_nvme_io": 0, 00:09:03.362 "transports": [] 00:09:03.362 }, 00:09:03.362 { 00:09:03.362 "name": "nvmf_tgt_poll_group_2", 00:09:03.362 "admin_qpairs": 0, 00:09:03.362 "io_qpairs": 0, 00:09:03.362 "current_admin_qpairs": 0, 00:09:03.362 "current_io_qpairs": 0, 00:09:03.362 "pending_bdev_io": 0, 00:09:03.362 "completed_nvme_io": 0, 00:09:03.362 "transports": [] 00:09:03.362 }, 00:09:03.362 { 00:09:03.362 "name": "nvmf_tgt_poll_group_3", 00:09:03.362 "admin_qpairs": 0, 00:09:03.362 "io_qpairs": 0, 00:09:03.362 "current_admin_qpairs": 0, 00:09:03.363 "current_io_qpairs": 0, 00:09:03.363 "pending_bdev_io": 0, 00:09:03.363 "completed_nvme_io": 0, 00:09:03.363 "transports": [] 00:09:03.363 } 00:09:03.363 ] 00:09:03.363 }' 00:09:03.363 20:40:27 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:03.363 20:40:27 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:03.363 20:40:27 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:03.363 20:40:27 -- target/rpc.sh@15 -- # wc -l 00:09:03.363 20:40:27 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:03.363 20:40:27 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:03.363 20:40:27 -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:03.363 20:40:27 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.363 20:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.363 20:40:27 -- common/autotest_common.sh@10 -- # set +x 00:09:03.363 [2024-04-24 20:40:27.982856] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.363 20:40:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.363 20:40:27 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:03.363 20:40:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.363 20:40:27 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 20:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.625 20:40:28 -- target/rpc.sh@33 -- # stats='{ 00:09:03.625 "tick_rate": 2400000000, 00:09:03.625 "poll_groups": [ 00:09:03.625 { 00:09:03.625 "name": "nvmf_tgt_poll_group_0", 00:09:03.625 "admin_qpairs": 0, 00:09:03.625 "io_qpairs": 0, 00:09:03.625 "current_admin_qpairs": 0, 00:09:03.625 "current_io_qpairs": 0, 00:09:03.625 "pending_bdev_io": 0, 00:09:03.625 "completed_nvme_io": 0, 00:09:03.625 "transports": [ 00:09:03.625 { 00:09:03.625 "trtype": "TCP" 00:09:03.625 } 00:09:03.625 ] 00:09:03.625 }, 00:09:03.625 { 00:09:03.625 "name": "nvmf_tgt_poll_group_1", 00:09:03.625 "admin_qpairs": 0, 00:09:03.625 "io_qpairs": 0, 00:09:03.625 "current_admin_qpairs": 0, 00:09:03.625 "current_io_qpairs": 0, 00:09:03.625 "pending_bdev_io": 0, 00:09:03.625 "completed_nvme_io": 0, 00:09:03.625 "transports": [ 00:09:03.625 { 00:09:03.625 "trtype": "TCP" 00:09:03.625 } 00:09:03.625 ] 00:09:03.625 }, 00:09:03.625 { 00:09:03.625 "name": "nvmf_tgt_poll_group_2", 00:09:03.625 "admin_qpairs": 0, 00:09:03.625 "io_qpairs": 0, 00:09:03.625 "current_admin_qpairs": 0, 00:09:03.625 "current_io_qpairs": 0, 00:09:03.625 "pending_bdev_io": 0, 00:09:03.625 "completed_nvme_io": 0, 00:09:03.625 "transports": [ 00:09:03.625 { 00:09:03.625 "trtype": "TCP" 00:09:03.625 } 00:09:03.625 ] 00:09:03.625 }, 00:09:03.625 { 00:09:03.625 "name": "nvmf_tgt_poll_group_3", 00:09:03.625 "admin_qpairs": 0, 00:09:03.625 "io_qpairs": 0, 00:09:03.625 "current_admin_qpairs": 0, 00:09:03.625 "current_io_qpairs": 0, 00:09:03.625 "pending_bdev_io": 0, 00:09:03.625 "completed_nvme_io": 0, 00:09:03.625 "transports": [ 00:09:03.625 { 00:09:03.625 "trtype": "TCP" 00:09:03.625 } 00:09:03.625 ] 00:09:03.625 } 00:09:03.625 ] 00:09:03.625 }' 00:09:03.625 20:40:28 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:03.625 20:40:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:03.625 20:40:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:03.625 20:40:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:03.625 20:40:28 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:03.625 20:40:28 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:03.625 20:40:28 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:03.625 20:40:28 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:03.625 20:40:28 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:03.625 20:40:28 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:03.625 20:40:28 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:03.625 20:40:28 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:03.625 20:40:28 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:03.625 20:40:28 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:03.625 20:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.625 20:40:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 Malloc1 00:09:03.625 20:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.625 20:40:28 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:03.625 20:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.625 20:40:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 
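[Editor's note] The jcount/jsum helpers used in the stats checks above are small jq/awk wrappers. Written out, the assertions on the nvmf_get_stats output amount to roughly the following (rpc_cmd is the harness wrapper that talks to /var/tmp/spdk.sock; the expected count of 4 follows from the 0xF core mask of this run):

    # With no transport created yet, each poll group reports an empty
    # "transports" array, and there must be one poll group per core.
    stats=$(rpc_cmd nvmf_get_stats)
    [ "$(echo "$stats" | jq '.poll_groups[].name' | wc -l)" -eq 4 ]
    [ "$(echo "$stats" | jq '.poll_groups[0].transports[0]')" = null ]

    # Create the TCP transport with the options used by this harness, then
    # re-read the stats: every poll group now lists a TCP transport and still
    # owns zero admin and zero I/O queue pairs, since no host has connected.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    stats=$(rpc_cmd nvmf_get_stats)
    [ "$(echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}')" -eq 0 ]
    [ "$(echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}')" -eq 0 ]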
20:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.625 20:40:28 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:03.625 20:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.625 20:40:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 20:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.625 20:40:28 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:03.625 20:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.625 20:40:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 20:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.625 20:40:28 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.625 20:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.625 20:40:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.625 [2024-04-24 20:40:28.166593] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.625 20:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.625 20:40:28 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:09:03.625 20:40:28 -- common/autotest_common.sh@638 -- # local es=0 00:09:03.625 20:40:28 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:09:03.625 20:40:28 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:03.625 20:40:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:03.625 20:40:28 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:03.625 20:40:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:03.625 20:40:28 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:03.626 20:40:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:03.626 20:40:28 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:03.626 20:40:28 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:03.626 20:40:28 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.2 -s 4420 00:09:03.626 [2024-04-24 20:40:28.193362] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:09:03.626 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:03.626 could not add new controller: failed to write to nvme-fabrics device 00:09:03.626 20:40:28 -- common/autotest_common.sh@641 -- # es=1 00:09:03.626 20:40:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:03.626 20:40:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:03.626 20:40:28 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:09:03.626 20:40:28 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:03.626 20:40:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:03.626 20:40:28 -- common/autotest_common.sh@10 -- # set +x 00:09:03.626 20:40:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:03.626 20:40:28 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:05.540 20:40:29 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:05.540 20:40:29 -- common/autotest_common.sh@1184 -- # local i=0 00:09:05.540 20:40:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:05.540 20:40:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:05.540 20:40:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:07.455 20:40:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:07.455 20:40:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:07.455 20:40:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:07.455 20:40:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:07.455 20:40:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:07.455 20:40:31 -- common/autotest_common.sh@1194 -- # return 0 00:09:07.455 20:40:31 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:07.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.455 20:40:31 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:07.455 20:40:31 -- common/autotest_common.sh@1205 -- # local i=0 00:09:07.455 20:40:31 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:07.455 20:40:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.455 20:40:31 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:07.455 20:40:31 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:07.455 20:40:32 -- common/autotest_common.sh@1217 -- # return 0 00:09:07.455 20:40:32 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:07.455 20:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.455 20:40:32 -- common/autotest_common.sh@10 -- # set +x 00:09:07.455 20:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.455 20:40:32 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.455 20:40:32 -- common/autotest_common.sh@638 -- # local es=0 00:09:07.456 20:40:32 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.456 20:40:32 -- common/autotest_common.sh@626 -- # local arg=nvme 00:09:07.456 20:40:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.456 20:40:32 -- common/autotest_common.sh@630 -- # type -t nvme 00:09:07.456 20:40:32 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.456 20:40:32 -- common/autotest_common.sh@632 -- # type -P nvme 00:09:07.456 20:40:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:07.456 20:40:32 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:09:07.456 20:40:32 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:09:07.456 20:40:32 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:07.456 [2024-04-24 20:40:32.049630] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204' 00:09:07.456 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:07.456 could not add new controller: failed to write to nvme-fabrics device 00:09:07.456 20:40:32 -- common/autotest_common.sh@641 -- # es=1 00:09:07.456 20:40:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:07.456 20:40:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:07.456 20:40:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:07.456 20:40:32 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:07.456 20:40:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:07.456 20:40:32 -- common/autotest_common.sh@10 -- # set +x 00:09:07.456 20:40:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:07.456 20:40:32 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:09.369 20:40:33 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:09.369 20:40:33 -- common/autotest_common.sh@1184 -- # local i=0 00:09:09.369 20:40:33 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.369 20:40:33 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:09.369 20:40:33 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:11.286 20:40:35 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:11.286 20:40:35 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:11.286 20:40:35 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.286 20:40:35 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:11.286 20:40:35 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.286 20:40:35 -- common/autotest_common.sh@1194 -- # return 0 00:09:11.286 20:40:35 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.287 20:40:35 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.287 20:40:35 -- common/autotest_common.sh@1205 -- # local i=0 00:09:11.287 20:40:35 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:11.287 20:40:35 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.287 20:40:35 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:11.287 20:40:35 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.287 20:40:35 -- common/autotest_common.sh@1217 -- # return 0 00:09:11.287 20:40:35 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:11.287 20:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.287 20:40:35 -- common/autotest_common.sh@10 -- # set +x 00:09:11.287 20:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.287 20:40:35 -- target/rpc.sh@81 -- # seq 1 5 00:09:11.287 20:40:35 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:11.287 20:40:35 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:11.287 20:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.287 20:40:35 -- common/autotest_common.sh@10 -- # set +x 00:09:11.287 20:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.287 20:40:35 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.287 20:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.287 20:40:35 -- common/autotest_common.sh@10 -- # set +x 00:09:11.287 [2024-04-24 20:40:35.749893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.287 20:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.287 20:40:35 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:11.287 20:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.287 20:40:35 -- common/autotest_common.sh@10 -- # set +x 00:09:11.287 20:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.287 20:40:35 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:11.287 20:40:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:11.287 20:40:35 -- common/autotest_common.sh@10 -- # set +x 00:09:11.287 20:40:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:11.287 20:40:35 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:13.201 20:40:37 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:13.201 20:40:37 -- common/autotest_common.sh@1184 -- # local i=0 00:09:13.201 20:40:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:13.201 20:40:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:13.201 20:40:37 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:15.120 20:40:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:15.120 20:40:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:15.120 20:40:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:15.120 20:40:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:15.120 20:40:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:15.120 20:40:39 -- common/autotest_common.sh@1194 -- # return 0 00:09:15.120 20:40:39 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.120 20:40:39 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.120 20:40:39 -- common/autotest_common.sh@1205 -- # local i=0 00:09:15.120 20:40:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:15.120 20:40:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
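[Editor's note] The access-control exchange logged just before this point (target/rpc.sh lines 58-78) boils down to the sketch below. The host NQN and host ID are the ones nvme gen-hostnqn produced for this run, and the "!" lines stand in for the NOT helper, which expects the connect to fail with "Subsystem ... does not allow host ...".

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
    hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204

    # With allow_any_host disabled and no hosts whitelisted, the connect must fail.
    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
    ! nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
          -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # Whitelisting the host NQN makes the same connect succeed ...
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
          -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

    # ... removing it makes the connect fail again, and re-enabling
    # allow_any_host opens the subsystem to every initiator.
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"
    ! nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
          -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1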
00:09:15.120 20:40:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:15.120 20:40:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.120 20:40:39 -- common/autotest_common.sh@1217 -- # return 0 00:09:15.120 20:40:39 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.120 20:40:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.120 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.120 20:40:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.120 20:40:39 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.120 20:40:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.120 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.120 20:40:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.120 20:40:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:15.120 20:40:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:15.120 20:40:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.120 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.120 20:40:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.120 20:40:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.120 20:40:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.120 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.120 [2024-04-24 20:40:39.498784] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.120 20:40:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.120 20:40:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:15.120 20:40:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.120 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.120 20:40:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.120 20:40:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:15.120 20:40:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:15.120 20:40:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.120 20:40:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:15.120 20:40:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.508 20:40:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.508 20:40:41 -- common/autotest_common.sh@1184 -- # local i=0 00:09:16.508 20:40:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.508 20:40:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:16.508 20:40:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:18.427 20:40:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:18.427 20:40:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:18.427 20:40:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.427 20:40:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:18.427 20:40:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.427 20:40:43 -- 
common/autotest_common.sh@1194 -- # return 0 00:09:18.427 20:40:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.691 20:40:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.691 20:40:43 -- common/autotest_common.sh@1205 -- # local i=0 00:09:18.691 20:40:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:18.691 20:40:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.691 20:40:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:18.691 20:40:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.691 20:40:43 -- common/autotest_common.sh@1217 -- # return 0 00:09:18.691 20:40:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.691 20:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.691 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 20:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.691 20:40:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.691 20:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.691 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 20:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.691 20:40:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:18.691 20:40:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:18.691 20:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.691 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 20:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.691 20:40:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.691 20:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.691 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 [2024-04-24 20:40:43.205694] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:18.691 20:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.691 20:40:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:18.691 20:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.691 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 20:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.691 20:40:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:18.691 20:40:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:18.691 20:40:43 -- common/autotest_common.sh@10 -- # set +x 00:09:18.691 20:40:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:18.691 20:40:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:20.643 20:40:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:20.643 20:40:44 -- common/autotest_common.sh@1184 -- # local i=0 00:09:20.643 20:40:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:20.643 20:40:44 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:09:20.643 20:40:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:22.560 20:40:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:22.560 20:40:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:22.560 20:40:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:22.560 20:40:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:22.560 20:40:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:22.560 20:40:46 -- common/autotest_common.sh@1194 -- # return 0 00:09:22.560 20:40:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.560 20:40:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.560 20:40:46 -- common/autotest_common.sh@1205 -- # local i=0 00:09:22.560 20:40:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:22.560 20:40:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.560 20:40:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:22.560 20:40:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.560 20:40:46 -- common/autotest_common.sh@1217 -- # return 0 00:09:22.560 20:40:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.560 20:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.560 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 20:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.560 20:40:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.560 20:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.560 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 20:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.560 20:40:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.560 20:40:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.560 20:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.560 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 20:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.560 20:40:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.560 20:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.560 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 [2024-04-24 20:40:46.956170] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.560 20:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.560 20:40:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.560 20:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.560 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 20:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.560 20:40:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.560 20:40:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:22.560 20:40:46 -- common/autotest_common.sh@10 -- # set +x 00:09:22.560 20:40:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:22.560 
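[Editor's note] The loop this run is in the middle of (target/rpc.sh lines 81-94, $loops = 5) repeats the same create/attach/connect/tear-down cycle each iteration. Condensed, and with the lsblk polling written as a simplified stand-in for the waitforserial helper (the real helper retries up to 15 times), one iteration looks roughly like:

    for i in $(seq 1 5); do
        # Build a fresh subsystem, expose Malloc1 as namespace 5, listen on TCP.
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1

        # Connect from the initiator side (same hostnqn/hostid as above) and
        # wait until a block device with the expected serial shows up.
        nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp \
             -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
            sleep 2
        done

        # Disconnect, then unwind the target-side configuration.
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done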
20:40:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:23.977 20:40:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.977 20:40:48 -- common/autotest_common.sh@1184 -- # local i=0 00:09:23.977 20:40:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.977 20:40:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:23.977 20:40:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:26.523 20:40:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:26.523 20:40:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:26.523 20:40:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:26.523 20:40:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:26.523 20:40:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:26.523 20:40:50 -- common/autotest_common.sh@1194 -- # return 0 00:09:26.523 20:40:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.523 20:40:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.523 20:40:50 -- common/autotest_common.sh@1205 -- # local i=0 00:09:26.523 20:40:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:26.523 20:40:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.523 20:40:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:26.523 20:40:50 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.523 20:40:50 -- common/autotest_common.sh@1217 -- # return 0 00:09:26.523 20:40:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.523 20:40:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.523 20:40:50 -- common/autotest_common.sh@10 -- # set +x 00:09:26.523 20:40:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.523 20:40:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.523 20:40:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.523 20:40:50 -- common/autotest_common.sh@10 -- # set +x 00:09:26.523 20:40:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.523 20:40:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:26.523 20:40:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:26.523 20:40:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.523 20:40:50 -- common/autotest_common.sh@10 -- # set +x 00:09:26.523 20:40:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.523 20:40:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.523 20:40:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.523 20:40:50 -- common/autotest_common.sh@10 -- # set +x 00:09:26.523 [2024-04-24 20:40:50.704523] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.523 20:40:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.523 20:40:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:26.523 
20:40:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.523 20:40:50 -- common/autotest_common.sh@10 -- # set +x 00:09:26.523 20:40:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.523 20:40:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:26.524 20:40:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:26.524 20:40:50 -- common/autotest_common.sh@10 -- # set +x 00:09:26.524 20:40:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:26.524 20:40:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:27.909 20:40:52 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.909 20:40:52 -- common/autotest_common.sh@1184 -- # local i=0 00:09:27.909 20:40:52 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.909 20:40:52 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:09:27.909 20:40:52 -- common/autotest_common.sh@1191 -- # sleep 2 00:09:29.820 20:40:54 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:09:29.820 20:40:54 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:09:29.820 20:40:54 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.820 20:40:54 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:09:29.820 20:40:54 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.820 20:40:54 -- common/autotest_common.sh@1194 -- # return 0 00:09:29.820 20:40:54 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:29.820 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.820 20:40:54 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:29.820 20:40:54 -- common/autotest_common.sh@1205 -- # local i=0 00:09:29.820 20:40:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:09:29.820 20:40:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.820 20:40:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:09:29.820 20:40:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:29.820 20:40:54 -- common/autotest_common.sh@1217 -- # return 0 00:09:29.820 20:40:54 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:29.820 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.820 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:29.820 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.820 20:40:54 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.820 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.820 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:29.820 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.820 20:40:54 -- target/rpc.sh@99 -- # seq 1 5 00:09:29.820 20:40:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:29.820 20:40:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:29.820 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.820 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:29.820 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.820 20:40:54 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.820 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.820 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:29.820 [2024-04-24 20:40:54.452893] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.820 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:29.820 20:40:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:29.820 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:29.820 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.081 20:40:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 [2024-04-24 20:40:54.513021] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.081 20:40:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 [2024-04-24 20:40:54.569210] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.081 20:40:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 [2024-04-24 20:40:54.629414] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 
20:40:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.081 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.081 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.081 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.081 20:40:54 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:30.082 20:40:54 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.082 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.082 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.082 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.082 20:40:54 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.082 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.082 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.082 [2024-04-24 20:40:54.693625] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.082 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.082 20:40:54 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:30.082 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.082 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.082 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.082 20:40:54 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.082 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.082 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.082 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.082 20:40:54 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.082 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.082 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.344 20:40:54 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.344 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.344 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.344 20:40:54 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:09:30.344 20:40:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:30.344 20:40:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 20:40:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:30.344 20:40:54 -- target/rpc.sh@110 -- # stats='{ 00:09:30.344 "tick_rate": 2400000000, 00:09:30.344 "poll_groups": [ 00:09:30.344 { 00:09:30.344 "name": "nvmf_tgt_poll_group_0", 00:09:30.344 "admin_qpairs": 0, 00:09:30.344 "io_qpairs": 224, 00:09:30.344 "current_admin_qpairs": 0, 00:09:30.344 "current_io_qpairs": 0, 00:09:30.344 "pending_bdev_io": 0, 00:09:30.344 "completed_nvme_io": 224, 00:09:30.344 "transports": [ 00:09:30.344 { 00:09:30.344 "trtype": "TCP" 00:09:30.344 } 00:09:30.344 ] 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "name": "nvmf_tgt_poll_group_1", 00:09:30.344 "admin_qpairs": 1, 00:09:30.344 "io_qpairs": 223, 00:09:30.344 "current_admin_qpairs": 0, 00:09:30.344 "current_io_qpairs": 0, 00:09:30.344 "pending_bdev_io": 0, 00:09:30.344 "completed_nvme_io": 466, 00:09:30.344 "transports": [ 00:09:30.344 { 00:09:30.344 "trtype": "TCP" 00:09:30.344 } 00:09:30.344 ] 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "name": "nvmf_tgt_poll_group_2", 00:09:30.344 "admin_qpairs": 6, 00:09:30.344 "io_qpairs": 218, 00:09:30.344 "current_admin_qpairs": 0, 00:09:30.344 "current_io_qpairs": 0, 00:09:30.344 "pending_bdev_io": 0, 00:09:30.344 "completed_nvme_io": 270, 00:09:30.344 "transports": [ 00:09:30.344 { 00:09:30.344 "trtype": "TCP" 00:09:30.344 } 00:09:30.344 ] 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "name": "nvmf_tgt_poll_group_3", 00:09:30.344 "admin_qpairs": 0, 00:09:30.344 "io_qpairs": 224, 00:09:30.344 "current_admin_qpairs": 0, 00:09:30.344 "current_io_qpairs": 0, 00:09:30.344 "pending_bdev_io": 0, 00:09:30.344 "completed_nvme_io": 279, 00:09:30.344 "transports": [ 00:09:30.344 { 00:09:30.344 "trtype": "TCP" 00:09:30.344 } 00:09:30.344 ] 00:09:30.344 } 00:09:30.344 ] 00:09:30.344 }' 00:09:30.344 20:40:54 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:30.344 20:40:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:30.344 20:40:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:30.344 20:40:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.344 20:40:54 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:30.344 20:40:54 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:30.344 20:40:54 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:30.344 20:40:54 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:30.344 20:40:54 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:30.344 20:40:54 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:09:30.344 20:40:54 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:30.344 20:40:54 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:30.344 20:40:54 -- target/rpc.sh@123 -- # nvmftestfini 00:09:30.344 20:40:54 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:30.344 20:40:54 -- nvmf/common.sh@117 -- # sync 00:09:30.344 20:40:54 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:30.344 20:40:54 -- nvmf/common.sh@120 -- # set +e 00:09:30.344 20:40:54 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:30.344 20:40:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:30.344 rmmod nvme_tcp 00:09:30.344 rmmod nvme_fabrics 00:09:30.344 rmmod nvme_keyring 00:09:30.344 20:40:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:30.344 20:40:54 -- nvmf/common.sh@124 -- # set -e 00:09:30.344 20:40:54 -- 
nvmf/common.sh@125 -- # return 0 00:09:30.344 20:40:54 -- nvmf/common.sh@478 -- # '[' -n 2636047 ']' 00:09:30.344 20:40:54 -- nvmf/common.sh@479 -- # killprocess 2636047 00:09:30.344 20:40:54 -- common/autotest_common.sh@936 -- # '[' -z 2636047 ']' 00:09:30.344 20:40:54 -- common/autotest_common.sh@940 -- # kill -0 2636047 00:09:30.344 20:40:54 -- common/autotest_common.sh@941 -- # uname 00:09:30.344 20:40:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:30.344 20:40:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2636047 00:09:30.605 20:40:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:30.605 20:40:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:30.605 20:40:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2636047' 00:09:30.605 killing process with pid 2636047 00:09:30.605 20:40:54 -- common/autotest_common.sh@955 -- # kill 2636047 00:09:30.605 20:40:54 -- common/autotest_common.sh@960 -- # wait 2636047 00:09:30.605 20:40:55 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:30.605 20:40:55 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:30.605 20:40:55 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:30.605 20:40:55 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:30.605 20:40:55 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:30.605 20:40:55 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.605 20:40:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.605 20:40:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.148 20:40:57 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.148 00:09:33.148 real 0m37.795s 00:09:33.148 user 1m53.994s 00:09:33.148 sys 0m7.413s 00:09:33.148 20:40:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:33.148 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 ************************************ 00:09:33.148 END TEST nvmf_rpc 00:09:33.148 ************************************ 00:09:33.148 20:40:57 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.148 20:40:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:33.148 20:40:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:33.148 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:09:33.148 ************************************ 00:09:33.148 START TEST nvmf_invalid 00:09:33.148 ************************************ 00:09:33.148 20:40:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:33.148 * Looking for test storage... 
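The jsum checks traced just above (target/rpc.sh@112-113) confirm that qpairs were really created by summing one numeric field across every poll group reported by nvmf_get_stats. The helper below is a minimal sketch of that jq + awk pattern, not the SPDK script itself: the stats_json variable and the sample JSON are illustrative assumptions, and only the jq filter shape and the awk summation mirror what the trace shows.

  # Sum one numeric field across all poll groups in an nvmf_get_stats reply.
  jsum() {
      local filter=$1                              # e.g. '.poll_groups[].io_qpairs'
      jq "$filter" <<<"$stats_json" | awk '{s += $1} END {print s}'
  }

  # Illustrative input only; a real run would capture the target's rpc.py nvmf_get_stats output.
  stats_json='{"poll_groups":[{"admin_qpairs":0,"io_qpairs":224},{"admin_qpairs":1,"io_qpairs":223}]}'

  (( $(jsum '.poll_groups[].admin_qpairs') > 0 )) && echo "admin qpairs were created"
  (( $(jsum '.poll_groups[].io_qpairs')   > 0 )) && echo "io qpairs were created"

In the run above the totals came out to 7 admin qpairs and 889 I/O qpairs, which is exactly what the (( 7 > 0 )) and (( 889 > 0 )) assertions in the trace verify.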
00:09:33.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.148 20:40:57 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.148 20:40:57 -- nvmf/common.sh@7 -- # uname -s 00:09:33.148 20:40:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.148 20:40:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.148 20:40:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.148 20:40:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.148 20:40:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.148 20:40:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.148 20:40:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.148 20:40:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.148 20:40:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.148 20:40:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.148 20:40:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:33.148 20:40:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:33.148 20:40:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.148 20:40:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.148 20:40:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.148 20:40:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.148 20:40:57 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.148 20:40:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.148 20:40:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.148 20:40:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.148 20:40:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.148 20:40:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.148 20:40:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.148 20:40:57 -- paths/export.sh@5 -- # export PATH 00:09:33.148 20:40:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.148 20:40:57 -- nvmf/common.sh@47 -- # : 0 00:09:33.148 20:40:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.148 20:40:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.148 20:40:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.148 20:40:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.148 20:40:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.148 20:40:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.148 20:40:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.148 20:40:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.148 20:40:57 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:33.148 20:40:57 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.148 20:40:57 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:33.148 20:40:57 -- target/invalid.sh@14 -- # target=foobar 00:09:33.148 20:40:57 -- target/invalid.sh@16 -- # RANDOM=0 00:09:33.148 20:40:57 -- target/invalid.sh@34 -- # nvmftestinit 00:09:33.149 20:40:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:33.149 20:40:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:33.149 20:40:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:33.149 20:40:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:33.149 20:40:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:33.149 20:40:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:33.149 20:40:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:33.149 20:40:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.149 20:40:57 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:33.149 20:40:57 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:33.149 20:40:57 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:33.149 20:40:57 -- common/autotest_common.sh@10 -- # set +x 00:09:39.772 20:41:04 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:39.772 20:41:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.772 20:41:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.772 20:41:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.772 20:41:04 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.772 20:41:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.772 20:41:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.772 20:41:04 -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.772 20:41:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.772 20:41:04 -- nvmf/common.sh@296 -- # e810=() 00:09:39.772 20:41:04 -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.772 20:41:04 -- nvmf/common.sh@297 -- # x722=() 00:09:39.772 20:41:04 -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.772 20:41:04 -- nvmf/common.sh@298 -- # mlx=() 00:09:39.772 20:41:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.772 20:41:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.772 20:41:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.772 20:41:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:39.772 20:41:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.772 20:41:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.772 20:41:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:39.772 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:39.772 20:41:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.772 20:41:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:39.772 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:39.772 20:41:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.772 20:41:04 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.772 
20:41:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.772 20:41:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:39.772 20:41:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.772 20:41:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:39.772 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:39.772 20:41:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.772 20:41:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.772 20:41:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.772 20:41:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:39.772 20:41:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.772 20:41:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:39.772 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:39.772 20:41:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.772 20:41:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:39.772 20:41:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:39.772 20:41:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:39.772 20:41:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:39.772 20:41:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.773 20:41:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:39.773 20:41:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:39.773 20:41:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:39.773 20:41:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:39.773 20:41:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:39.773 20:41:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:39.773 20:41:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:39.773 20:41:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.773 20:41:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:39.773 20:41:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:39.773 20:41:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:39.773 20:41:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:39.773 20:41:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:39.773 20:41:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:39.773 20:41:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:40.034 20:41:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.034 20:41:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.034 20:41:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.034 20:41:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:40.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:40.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms 00:09:40.034 00:09:40.034 --- 10.0.0.2 ping statistics --- 00:09:40.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.034 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms 00:09:40.034 20:41:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:40.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:09:40.034 00:09:40.034 --- 10.0.0.1 ping statistics --- 00:09:40.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.034 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:09:40.034 20:41:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.034 20:41:04 -- nvmf/common.sh@411 -- # return 0 00:09:40.034 20:41:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:40.034 20:41:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.034 20:41:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:40.034 20:41:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:40.034 20:41:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.034 20:41:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:40.034 20:41:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:40.034 20:41:04 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:40.034 20:41:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:40.034 20:41:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:40.034 20:41:04 -- common/autotest_common.sh@10 -- # set +x 00:09:40.034 20:41:04 -- nvmf/common.sh@470 -- # nvmfpid=2645911 00:09:40.034 20:41:04 -- nvmf/common.sh@471 -- # waitforlisten 2645911 00:09:40.034 20:41:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.034 20:41:04 -- common/autotest_common.sh@817 -- # '[' -z 2645911 ']' 00:09:40.034 20:41:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.034 20:41:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:40.034 20:41:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.034 20:41:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:40.034 20:41:04 -- common/autotest_common.sh@10 -- # set +x 00:09:40.034 [2024-04-24 20:41:04.654987] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:09:40.034 [2024-04-24 20:41:04.655052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.295 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.295 [2024-04-24 20:41:04.745806] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.295 [2024-04-24 20:41:04.840963] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.295 [2024-04-24 20:41:04.841028] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.295 [2024-04-24 20:41:04.841036] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.295 [2024-04-24 20:41:04.841043] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.295 [2024-04-24 20:41:04.841049] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
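The nvmf_tcp_init sequence traced above gives the test a real two-port link: one ice port is moved into a private network namespace for the target (10.0.0.2) while the other stays in the default namespace as the initiator side (10.0.0.1), and NVMe/TCP traffic to port 4420 is explicitly allowed. The commands below are a condensed sketch reconstructed from that trace; the interface and namespace names are the ones this log happened to assign (cvl_0_0 / cvl_0_1) and stand in for whatever ports a given machine exposes.

  TGT_IF=cvl_0_0                 # port handed to the SPDK target
  INI_IF=cvl_0_1                 # port left in the default namespace (initiator side)
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"                                         # private namespace for the target
  ip link set "$TGT_IF" netns "$NS"                          # move the target-side port into it
  ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                         # reach the target side
  ip netns exec "$NS" ping -c 1 10.0.0.1                     # and back again

This is also why nvmf_tgt is launched with "ip netns exec cvl_0_0_ns_spdk" in the trace that follows: the listener on 10.0.0.2:4420 lives inside that namespace, and the nvme connect calls elsewhere in this log reach it from the default namespace.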
00:09:40.295 [2024-04-24 20:41:04.841190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.295 [2024-04-24 20:41:04.841335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.295 [2024-04-24 20:41:04.841504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.295 [2024-04-24 20:41:04.841504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:41.236 20:41:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:41.236 20:41:05 -- common/autotest_common.sh@850 -- # return 0 00:09:41.236 20:41:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:41.236 20:41:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:41.236 20:41:05 -- common/autotest_common.sh@10 -- # set +x 00:09:41.236 20:41:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.236 20:41:05 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:41.236 20:41:05 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4593 00:09:41.236 [2024-04-24 20:41:05.766036] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:41.236 20:41:05 -- target/invalid.sh@40 -- # out='request: 00:09:41.236 { 00:09:41.236 "nqn": "nqn.2016-06.io.spdk:cnode4593", 00:09:41.236 "tgt_name": "foobar", 00:09:41.236 "method": "nvmf_create_subsystem", 00:09:41.236 "req_id": 1 00:09:41.236 } 00:09:41.236 Got JSON-RPC error response 00:09:41.236 response: 00:09:41.236 { 00:09:41.236 "code": -32603, 00:09:41.236 "message": "Unable to find target foobar" 00:09:41.236 }' 00:09:41.236 20:41:05 -- target/invalid.sh@41 -- # [[ request: 00:09:41.236 { 00:09:41.236 "nqn": "nqn.2016-06.io.spdk:cnode4593", 00:09:41.236 "tgt_name": "foobar", 00:09:41.236 "method": "nvmf_create_subsystem", 00:09:41.236 "req_id": 1 00:09:41.236 } 00:09:41.236 Got JSON-RPC error response 00:09:41.236 response: 00:09:41.236 { 00:09:41.236 "code": -32603, 00:09:41.236 "message": "Unable to find target foobar" 00:09:41.236 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:41.236 20:41:05 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:41.236 20:41:05 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20753 00:09:41.497 [2024-04-24 20:41:05.986820] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20753: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:41.497 20:41:06 -- target/invalid.sh@45 -- # out='request: 00:09:41.497 { 00:09:41.497 "nqn": "nqn.2016-06.io.spdk:cnode20753", 00:09:41.497 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.497 "method": "nvmf_create_subsystem", 00:09:41.497 "req_id": 1 00:09:41.497 } 00:09:41.497 Got JSON-RPC error response 00:09:41.497 response: 00:09:41.497 { 00:09:41.497 "code": -32602, 00:09:41.497 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.497 }' 00:09:41.497 20:41:06 -- target/invalid.sh@46 -- # [[ request: 00:09:41.497 { 00:09:41.497 "nqn": "nqn.2016-06.io.spdk:cnode20753", 00:09:41.497 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.497 "method": "nvmf_create_subsystem", 00:09:41.497 "req_id": 1 00:09:41.497 } 00:09:41.497 Got JSON-RPC error response 00:09:41.497 response: 00:09:41.497 { 
00:09:41.497 "code": -32602, 00:09:41.497 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.497 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:41.497 20:41:06 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:41.497 20:41:06 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode931 00:09:41.757 [2024-04-24 20:41:06.211626] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode931: invalid model number 'SPDK_Controller' 00:09:41.757 20:41:06 -- target/invalid.sh@50 -- # out='request: 00:09:41.757 { 00:09:41.757 "nqn": "nqn.2016-06.io.spdk:cnode931", 00:09:41.757 "model_number": "SPDK_Controller\u001f", 00:09:41.757 "method": "nvmf_create_subsystem", 00:09:41.757 "req_id": 1 00:09:41.757 } 00:09:41.757 Got JSON-RPC error response 00:09:41.757 response: 00:09:41.757 { 00:09:41.757 "code": -32602, 00:09:41.757 "message": "Invalid MN SPDK_Controller\u001f" 00:09:41.758 }' 00:09:41.758 20:41:06 -- target/invalid.sh@51 -- # [[ request: 00:09:41.758 { 00:09:41.758 "nqn": "nqn.2016-06.io.spdk:cnode931", 00:09:41.758 "model_number": "SPDK_Controller\u001f", 00:09:41.758 "method": "nvmf_create_subsystem", 00:09:41.758 "req_id": 1 00:09:41.758 } 00:09:41.758 Got JSON-RPC error response 00:09:41.758 response: 00:09:41.758 { 00:09:41.758 "code": -32602, 00:09:41.758 "message": "Invalid MN SPDK_Controller\u001f" 00:09:41.758 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:41.758 20:41:06 -- target/invalid.sh@54 -- # gen_random_s 21 00:09:41.758 20:41:06 -- target/invalid.sh@19 -- # local length=21 ll 00:09:41.758 20:41:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:41.758 20:41:06 -- target/invalid.sh@21 -- # local chars 00:09:41.758 20:41:06 -- target/invalid.sh@22 -- # local string 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 126 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+='~' 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 51 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x33' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=3 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 110 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=n 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 37 00:09:41.758 20:41:06 -- target/invalid.sh@25 
-- # echo -e '\x25' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=% 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 71 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x47' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=G 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 83 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=S 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 88 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=X 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 85 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=U 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 97 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x61' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=a 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 48 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x30' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=0 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 64 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x40' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=@ 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 100 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=d 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 95 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=_ 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 36 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+='$' 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 85 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo 
-e '\x55' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=U 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 38 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x26' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+='&' 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 96 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+='`' 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 35 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+='#' 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 65 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+=A 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:41.758 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # printf %x 35 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:41.758 20:41:06 -- target/invalid.sh@25 -- # string+='#' 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # printf %x 76 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # string+=L 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.019 20:41:06 -- target/invalid.sh@28 -- # [[ ~ == \- ]] 00:09:42.019 20:41:06 -- target/invalid.sh@31 -- # echo '~3n%GSXUa0@d_$U&`#A#L' 00:09:42.019 20:41:06 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '~3n%GSXUa0@d_$U&`#A#L' nqn.2016-06.io.spdk:cnode5616 00:09:42.019 [2024-04-24 20:41:06.596881] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5616: invalid serial number '~3n%GSXUa0@d_$U&`#A#L' 00:09:42.019 20:41:06 -- target/invalid.sh@54 -- # out='request: 00:09:42.019 { 00:09:42.019 "nqn": "nqn.2016-06.io.spdk:cnode5616", 00:09:42.019 "serial_number": "~3n%GSXUa0@d_$U&`#A#L", 00:09:42.019 "method": "nvmf_create_subsystem", 00:09:42.019 "req_id": 1 00:09:42.019 } 00:09:42.019 Got JSON-RPC error response 00:09:42.019 response: 00:09:42.019 { 00:09:42.019 "code": -32602, 00:09:42.019 "message": "Invalid SN ~3n%GSXUa0@d_$U&`#A#L" 00:09:42.019 }' 00:09:42.019 20:41:06 -- target/invalid.sh@55 -- # [[ request: 00:09:42.019 { 00:09:42.019 "nqn": "nqn.2016-06.io.spdk:cnode5616", 00:09:42.019 "serial_number": "~3n%GSXUa0@d_$U&`#A#L", 00:09:42.019 "method": "nvmf_create_subsystem", 00:09:42.019 "req_id": 1 00:09:42.019 } 00:09:42.019 Got JSON-RPC error response 00:09:42.019 response: 00:09:42.019 { 00:09:42.019 "code": -32602, 00:09:42.019 "message": "Invalid SN 
~3n%GSXUa0@d_$U&`#A#L" 00:09:42.019 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:42.019 20:41:06 -- target/invalid.sh@58 -- # gen_random_s 41 00:09:42.019 20:41:06 -- target/invalid.sh@19 -- # local length=41 ll 00:09:42.019 20:41:06 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:42.019 20:41:06 -- target/invalid.sh@21 -- # local chars 00:09:42.019 20:41:06 -- target/invalid.sh@22 -- # local string 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # printf %x 127 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # string+=$'\177' 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # printf %x 114 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # string+=r 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # printf %x 100 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x64' 00:09:42.019 20:41:06 -- target/invalid.sh@25 -- # string+=d 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.019 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 44 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=, 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 32 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=' ' 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 83 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x53' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=S 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 115 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x73' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=s 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 85 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x55' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=U 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 
00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 72 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=H 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 88 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x58' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=X 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 77 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=M 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 118 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x76' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=v 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 104 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # string+=h 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.280 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # printf %x 107 00:09:42.280 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=k 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 106 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=j 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 82 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=R 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 98 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x62' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=b 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 52 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=4 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 69 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x45' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=E 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 
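The long run of printf %x / echo -e steps above is invalid.sh's gen_random_s building a throwaway serial number one character at a time from ASCII codes 32-127, so the negative RPC tests get exercised with awkward characters such as quotes, backticks and DEL. A compact sketch of the same idea follows; the function name and the decimal-to-hex-to-character conversion are as traced, while the simple RANDOM-based selection here is an assumption (the real script walks a pre-built chars array with RANDOM seeded to 0).

  # Build a random printable string of the requested length, one character at a time.
  gen_random_s() {
      local length=$1 ll string=''
      for (( ll = 0; ll < length; ll++ )); do
          local code=$(( 32 + RANDOM % 96 ))               # same 32..127 range as the chars array
          string+=$(echo -e "\\x$(printf %x "$code")")     # decimal -> hex -> character
      done
      printf '%s\n' "$string"
  }

  gen_random_s 21    # e.g. the 21-character serial number requested above
  gen_random_s 41    # e.g. the longer string requested at invalid.sh@58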
00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 34 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x22' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+='"' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 32 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x20' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=' ' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 63 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+='?' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 96 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+='`' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 37 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x25' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=% 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 127 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=$'\177' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 60 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+='<' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 55 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x37' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=7 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 117 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x75' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=u 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 39 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=\' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 41 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x29' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=')' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( 
ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 102 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=f 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 45 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=- 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 46 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=. 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 124 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+='|' 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 113 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=q 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 76 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=L 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 105 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=i 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 72 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=H 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # printf %x 121 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x79' 00:09:42.281 20:41:06 -- target/invalid.sh@25 -- # string+=y 00:09:42.281 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.542 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.542 20:41:06 -- target/invalid.sh@25 -- # printf %x 104 00:09:42.542 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x68' 00:09:42.542 20:41:06 -- target/invalid.sh@25 -- # string+=h 00:09:42.542 20:41:06 -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.542 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.542 20:41:06 -- target/invalid.sh@25 -- # printf %x 43 00:09:42.542 20:41:06 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:09:42.542 20:41:06 -- target/invalid.sh@25 -- # string+=+ 00:09:42.542 20:41:06 -- target/invalid.sh@24 -- # (( ll++ 
)) 00:09:42.542 20:41:06 -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.542 20:41:06 -- target/invalid.sh@28 -- # [[  == \- ]] 00:09:42.542 20:41:06 -- target/invalid.sh@31 -- # echo 'rd, SsUHXMvhkjRb4E" ?`%<7u'\'')f-.|qLiHyh+' 00:09:42.542 20:41:06 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'rd, SsUHXMvhkjRb4E" ?`%<7u'\'')f-.|qLiHyh+' nqn.2016-06.io.spdk:cnode31000 00:09:42.542 [2024-04-24 20:41:07.122608] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31000: invalid model number 'rd, SsUHXMvhkjRb4E" ?`%<7u')f-.|qLiHyh+' 00:09:42.542 20:41:07 -- target/invalid.sh@58 -- # out='request: 00:09:42.542 { 00:09:42.542 "nqn": "nqn.2016-06.io.spdk:cnode31000", 00:09:42.542 "model_number": "\u007frd, SsUHXMvhkjRb4E\" ?`%\u007f<7u'\'')f-.|qLiHyh+", 00:09:42.542 "method": "nvmf_create_subsystem", 00:09:42.542 "req_id": 1 00:09:42.542 } 00:09:42.542 Got JSON-RPC error response 00:09:42.542 response: 00:09:42.542 { 00:09:42.542 "code": -32602, 00:09:42.542 "message": "Invalid MN \u007frd, SsUHXMvhkjRb4E\" ?`%\u007f<7u'\'')f-.|qLiHyh+" 00:09:42.542 }' 00:09:42.542 20:41:07 -- target/invalid.sh@59 -- # [[ request: 00:09:42.542 { 00:09:42.542 "nqn": "nqn.2016-06.io.spdk:cnode31000", 00:09:42.542 "model_number": "\u007frd, SsUHXMvhkjRb4E\" ?`%\u007f<7u')f-.|qLiHyh+", 00:09:42.542 "method": "nvmf_create_subsystem", 00:09:42.542 "req_id": 1 00:09:42.542 } 00:09:42.542 Got JSON-RPC error response 00:09:42.542 response: 00:09:42.542 { 00:09:42.542 "code": -32602, 00:09:42.542 "message": "Invalid MN \u007frd, SsUHXMvhkjRb4E\" ?`%\u007f<7u')f-.|qLiHyh+" 00:09:42.542 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:42.542 20:41:07 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:09:42.803 [2024-04-24 20:41:07.339355] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:42.803 20:41:07 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:43.063 20:41:07 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:09:43.063 20:41:07 -- target/invalid.sh@67 -- # echo '' 00:09:43.063 20:41:07 -- target/invalid.sh@67 -- # head -n 1 00:09:43.063 20:41:07 -- target/invalid.sh@67 -- # IP= 00:09:43.063 20:41:07 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:09:43.323 [2024-04-24 20:41:07.784775] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:43.323 20:41:07 -- target/invalid.sh@69 -- # out='request: 00:09:43.323 { 00:09:43.323 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:43.323 "listen_address": { 00:09:43.324 "trtype": "tcp", 00:09:43.324 "traddr": "", 00:09:43.324 "trsvcid": "4421" 00:09:43.324 }, 00:09:43.324 "method": "nvmf_subsystem_remove_listener", 00:09:43.324 "req_id": 1 00:09:43.324 } 00:09:43.324 Got JSON-RPC error response 00:09:43.324 response: 00:09:43.324 { 00:09:43.324 "code": -32602, 00:09:43.324 "message": "Invalid parameters" 00:09:43.324 }' 00:09:43.324 20:41:07 -- target/invalid.sh@70 -- # [[ request: 00:09:43.324 { 00:09:43.324 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:43.324 "listen_address": { 00:09:43.324 "trtype": "tcp", 00:09:43.324 "traddr": "", 00:09:43.324 "trsvcid": "4421" 00:09:43.324 }, 00:09:43.324 "method": 
"nvmf_subsystem_remove_listener", 00:09:43.324 "req_id": 1 00:09:43.324 } 00:09:43.324 Got JSON-RPC error response 00:09:43.324 response: 00:09:43.324 { 00:09:43.324 "code": -32602, 00:09:43.324 "message": "Invalid parameters" 00:09:43.324 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:43.324 20:41:07 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7239 -i 0 00:09:43.595 [2024-04-24 20:41:08.001396] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7239: invalid cntlid range [0-65519] 00:09:43.595 20:41:08 -- target/invalid.sh@73 -- # out='request: 00:09:43.595 { 00:09:43.595 "nqn": "nqn.2016-06.io.spdk:cnode7239", 00:09:43.595 "min_cntlid": 0, 00:09:43.595 "method": "nvmf_create_subsystem", 00:09:43.595 "req_id": 1 00:09:43.595 } 00:09:43.595 Got JSON-RPC error response 00:09:43.595 response: 00:09:43.595 { 00:09:43.595 "code": -32602, 00:09:43.595 "message": "Invalid cntlid range [0-65519]" 00:09:43.595 }' 00:09:43.595 20:41:08 -- target/invalid.sh@74 -- # [[ request: 00:09:43.595 { 00:09:43.595 "nqn": "nqn.2016-06.io.spdk:cnode7239", 00:09:43.595 "min_cntlid": 0, 00:09:43.595 "method": "nvmf_create_subsystem", 00:09:43.595 "req_id": 1 00:09:43.595 } 00:09:43.595 Got JSON-RPC error response 00:09:43.595 response: 00:09:43.595 { 00:09:43.595 "code": -32602, 00:09:43.595 "message": "Invalid cntlid range [0-65519]" 00:09:43.595 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.595 20:41:08 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11775 -i 65520 00:09:43.595 [2024-04-24 20:41:08.218115] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11775: invalid cntlid range [65520-65519] 00:09:43.858 20:41:08 -- target/invalid.sh@75 -- # out='request: 00:09:43.858 { 00:09:43.858 "nqn": "nqn.2016-06.io.spdk:cnode11775", 00:09:43.858 "min_cntlid": 65520, 00:09:43.858 "method": "nvmf_create_subsystem", 00:09:43.858 "req_id": 1 00:09:43.858 } 00:09:43.858 Got JSON-RPC error response 00:09:43.858 response: 00:09:43.858 { 00:09:43.858 "code": -32602, 00:09:43.858 "message": "Invalid cntlid range [65520-65519]" 00:09:43.858 }' 00:09:43.858 20:41:08 -- target/invalid.sh@76 -- # [[ request: 00:09:43.858 { 00:09:43.858 "nqn": "nqn.2016-06.io.spdk:cnode11775", 00:09:43.858 "min_cntlid": 65520, 00:09:43.858 "method": "nvmf_create_subsystem", 00:09:43.858 "req_id": 1 00:09:43.858 } 00:09:43.858 Got JSON-RPC error response 00:09:43.858 response: 00:09:43.858 { 00:09:43.858 "code": -32602, 00:09:43.858 "message": "Invalid cntlid range [65520-65519]" 00:09:43.858 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.858 20:41:08 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27989 -I 0 00:09:43.858 [2024-04-24 20:41:08.430836] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27989: invalid cntlid range [1-0] 00:09:43.858 20:41:08 -- target/invalid.sh@77 -- # out='request: 00:09:43.858 { 00:09:43.858 "nqn": "nqn.2016-06.io.spdk:cnode27989", 00:09:43.858 "max_cntlid": 0, 00:09:43.858 "method": "nvmf_create_subsystem", 00:09:43.858 "req_id": 1 00:09:43.858 } 00:09:43.858 Got JSON-RPC error response 00:09:43.858 response: 00:09:43.858 { 00:09:43.858 "code": -32602, 00:09:43.858 "message": "Invalid 
cntlid range [1-0]" 00:09:43.858 }' 00:09:43.858 20:41:08 -- target/invalid.sh@78 -- # [[ request: 00:09:43.858 { 00:09:43.858 "nqn": "nqn.2016-06.io.spdk:cnode27989", 00:09:43.858 "max_cntlid": 0, 00:09:43.858 "method": "nvmf_create_subsystem", 00:09:43.858 "req_id": 1 00:09:43.858 } 00:09:43.858 Got JSON-RPC error response 00:09:43.858 response: 00:09:43.858 { 00:09:43.858 "code": -32602, 00:09:43.858 "message": "Invalid cntlid range [1-0]" 00:09:43.858 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.858 20:41:08 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21588 -I 65520 00:09:44.119 [2024-04-24 20:41:08.647547] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21588: invalid cntlid range [1-65520] 00:09:44.119 20:41:08 -- target/invalid.sh@79 -- # out='request: 00:09:44.119 { 00:09:44.119 "nqn": "nqn.2016-06.io.spdk:cnode21588", 00:09:44.119 "max_cntlid": 65520, 00:09:44.119 "method": "nvmf_create_subsystem", 00:09:44.119 "req_id": 1 00:09:44.119 } 00:09:44.119 Got JSON-RPC error response 00:09:44.119 response: 00:09:44.119 { 00:09:44.119 "code": -32602, 00:09:44.119 "message": "Invalid cntlid range [1-65520]" 00:09:44.119 }' 00:09:44.119 20:41:08 -- target/invalid.sh@80 -- # [[ request: 00:09:44.119 { 00:09:44.119 "nqn": "nqn.2016-06.io.spdk:cnode21588", 00:09:44.119 "max_cntlid": 65520, 00:09:44.119 "method": "nvmf_create_subsystem", 00:09:44.119 "req_id": 1 00:09:44.119 } 00:09:44.119 Got JSON-RPC error response 00:09:44.119 response: 00:09:44.119 { 00:09:44.119 "code": -32602, 00:09:44.119 "message": "Invalid cntlid range [1-65520]" 00:09:44.119 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:44.119 20:41:08 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25098 -i 6 -I 5 00:09:44.380 [2024-04-24 20:41:08.864278] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25098: invalid cntlid range [6-5] 00:09:44.380 20:41:08 -- target/invalid.sh@83 -- # out='request: 00:09:44.380 { 00:09:44.380 "nqn": "nqn.2016-06.io.spdk:cnode25098", 00:09:44.380 "min_cntlid": 6, 00:09:44.380 "max_cntlid": 5, 00:09:44.380 "method": "nvmf_create_subsystem", 00:09:44.380 "req_id": 1 00:09:44.380 } 00:09:44.380 Got JSON-RPC error response 00:09:44.380 response: 00:09:44.380 { 00:09:44.380 "code": -32602, 00:09:44.380 "message": "Invalid cntlid range [6-5]" 00:09:44.380 }' 00:09:44.380 20:41:08 -- target/invalid.sh@84 -- # [[ request: 00:09:44.380 { 00:09:44.380 "nqn": "nqn.2016-06.io.spdk:cnode25098", 00:09:44.380 "min_cntlid": 6, 00:09:44.380 "max_cntlid": 5, 00:09:44.380 "method": "nvmf_create_subsystem", 00:09:44.380 "req_id": 1 00:09:44.380 } 00:09:44.380 Got JSON-RPC error response 00:09:44.380 response: 00:09:44.380 { 00:09:44.380 "code": -32602, 00:09:44.380 "message": "Invalid cntlid range [6-5]" 00:09:44.380 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:44.380 20:41:08 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:44.380 20:41:09 -- target/invalid.sh@87 -- # out='request: 00:09:44.380 { 00:09:44.380 "name": "foobar", 00:09:44.380 "method": "nvmf_delete_target", 00:09:44.380 "req_id": 1 00:09:44.380 } 00:09:44.380 Got JSON-RPC error response 00:09:44.380 response: 00:09:44.380 { 00:09:44.380 "code": 
-32602, 00:09:44.380 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:44.380 }' 00:09:44.380 20:41:09 -- target/invalid.sh@88 -- # [[ request: 00:09:44.380 { 00:09:44.380 "name": "foobar", 00:09:44.380 "method": "nvmf_delete_target", 00:09:44.380 "req_id": 1 00:09:44.380 } 00:09:44.380 Got JSON-RPC error response 00:09:44.380 response: 00:09:44.380 { 00:09:44.380 "code": -32602, 00:09:44.380 "message": "The specified target doesn't exist, cannot delete it." 00:09:44.380 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:44.380 20:41:09 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:44.380 20:41:09 -- target/invalid.sh@91 -- # nvmftestfini 00:09:44.380 20:41:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:44.380 20:41:09 -- nvmf/common.sh@117 -- # sync 00:09:44.641 20:41:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.641 20:41:09 -- nvmf/common.sh@120 -- # set +e 00:09:44.641 20:41:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.641 20:41:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.641 rmmod nvme_tcp 00:09:44.641 rmmod nvme_fabrics 00:09:44.641 rmmod nvme_keyring 00:09:44.641 20:41:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.641 20:41:09 -- nvmf/common.sh@124 -- # set -e 00:09:44.641 20:41:09 -- nvmf/common.sh@125 -- # return 0 00:09:44.641 20:41:09 -- nvmf/common.sh@478 -- # '[' -n 2645911 ']' 00:09:44.641 20:41:09 -- nvmf/common.sh@479 -- # killprocess 2645911 00:09:44.641 20:41:09 -- common/autotest_common.sh@936 -- # '[' -z 2645911 ']' 00:09:44.641 20:41:09 -- common/autotest_common.sh@940 -- # kill -0 2645911 00:09:44.641 20:41:09 -- common/autotest_common.sh@941 -- # uname 00:09:44.641 20:41:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:44.641 20:41:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2645911 00:09:44.641 20:41:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:44.641 20:41:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:44.641 20:41:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2645911' 00:09:44.641 killing process with pid 2645911 00:09:44.641 20:41:09 -- common/autotest_common.sh@955 -- # kill 2645911 00:09:44.641 20:41:09 -- common/autotest_common.sh@960 -- # wait 2645911 00:09:44.902 20:41:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:44.902 20:41:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:44.902 20:41:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:44.902 20:41:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.902 20:41:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.902 20:41:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.902 20:41:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.902 20:41:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.818 20:41:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.818 00:09:46.818 real 0m13.974s 00:09:46.818 user 0m22.388s 00:09:46.818 sys 0m6.449s 00:09:46.818 20:41:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:46.818 20:41:11 -- common/autotest_common.sh@10 -- # set +x 00:09:46.818 ************************************ 00:09:46.818 END TEST nvmf_invalid 00:09:46.818 ************************************ 00:09:46.818 20:41:11 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:46.818 20:41:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:46.818 20:41:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:46.818 20:41:11 -- common/autotest_common.sh@10 -- # set +x 00:09:47.080 ************************************ 00:09:47.080 START TEST nvmf_abort 00:09:47.080 ************************************ 00:09:47.080 20:41:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:47.080 * Looking for test storage... 00:09:47.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.080 20:41:11 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.080 20:41:11 -- nvmf/common.sh@7 -- # uname -s 00:09:47.080 20:41:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.080 20:41:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.080 20:41:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.080 20:41:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.080 20:41:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.080 20:41:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.080 20:41:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.080 20:41:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.080 20:41:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.080 20:41:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.080 20:41:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:47.080 20:41:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:47.080 20:41:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.080 20:41:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.080 20:41:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.080 20:41:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.080 20:41:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.080 20:41:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.080 20:41:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.080 20:41:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.080 20:41:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.080 20:41:11 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.080 20:41:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.080 20:41:11 -- paths/export.sh@5 -- # export PATH 00:09:47.080 20:41:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.080 20:41:11 -- nvmf/common.sh@47 -- # : 0 00:09:47.080 20:41:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.080 20:41:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.080 20:41:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.080 20:41:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.080 20:41:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.080 20:41:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.080 20:41:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.080 20:41:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.080 20:41:11 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:47.080 20:41:11 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:47.080 20:41:11 -- target/abort.sh@14 -- # nvmftestinit 00:09:47.080 20:41:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:47.080 20:41:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.080 20:41:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:47.080 20:41:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:47.080 20:41:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:47.080 20:41:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.080 20:41:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.080 20:41:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.080 20:41:11 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:47.080 20:41:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:47.080 20:41:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.080 20:41:11 -- common/autotest_common.sh@10 -- # set +x 00:09:55.232 20:41:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:09:55.232 20:41:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.232 20:41:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.232 20:41:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.232 20:41:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.232 20:41:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:55.232 20:41:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.232 20:41:18 -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.232 20:41:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.232 20:41:18 -- nvmf/common.sh@296 -- # e810=() 00:09:55.232 20:41:18 -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.232 20:41:18 -- nvmf/common.sh@297 -- # x722=() 00:09:55.232 20:41:18 -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.232 20:41:18 -- nvmf/common.sh@298 -- # mlx=() 00:09:55.232 20:41:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.232 20:41:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.232 20:41:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.232 20:41:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.232 20:41:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.232 20:41:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.232 20:41:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:55.232 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:55.232 20:41:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.232 20:41:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:55.232 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:55.232 20:41:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:09:55.232 20:41:18 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.232 20:41:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.232 20:41:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.232 20:41:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.232 20:41:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.233 20:41:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:55.233 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:55.233 20:41:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.233 20:41:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.233 20:41:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.233 20:41:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:09:55.233 20:41:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.233 20:41:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:55.233 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:55.233 20:41:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.233 20:41:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:09:55.233 20:41:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:09:55.233 20:41:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:09:55.233 20:41:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:09:55.233 20:41:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:09:55.233 20:41:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.233 20:41:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.233 20:41:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.233 20:41:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.233 20:41:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.233 20:41:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.233 20:41:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.233 20:41:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.233 20:41:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.233 20:41:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.233 20:41:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.233 20:41:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.233 20:41:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.233 20:41:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.233 20:41:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.233 20:41:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.233 20:41:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.233 20:41:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.233 20:41:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.233 20:41:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:55.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:09:55.233 00:09:55.233 --- 10.0.0.2 ping statistics --- 00:09:55.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.233 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:09:55.233 20:41:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:55.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:55.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:09:55.233 00:09:55.233 --- 10.0.0.1 ping statistics --- 00:09:55.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.233 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:09:55.233 20:41:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.233 20:41:18 -- nvmf/common.sh@411 -- # return 0 00:09:55.233 20:41:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:55.233 20:41:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.233 20:41:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:55.233 20:41:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:55.233 20:41:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.233 20:41:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:55.233 20:41:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:55.233 20:41:19 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:55.233 20:41:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:55.233 20:41:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 20:41:19 -- nvmf/common.sh@470 -- # nvmfpid=2651104 00:09:55.233 20:41:19 -- nvmf/common.sh@471 -- # waitforlisten 2651104 00:09:55.233 20:41:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:55.233 20:41:19 -- common/autotest_common.sh@817 -- # '[' -z 2651104 ']' 00:09:55.233 20:41:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.233 20:41:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:55.233 20:41:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.233 20:41:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 [2024-04-24 20:41:19.066110] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:09:55.233 [2024-04-24 20:41:19.066174] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.233 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.233 [2024-04-24 20:41:19.140115] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.233 [2024-04-24 20:41:19.212614] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.233 [2024-04-24 20:41:19.212656] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:55.233 [2024-04-24 20:41:19.212664] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.233 [2024-04-24 20:41:19.212672] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.233 [2024-04-24 20:41:19.212678] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:55.233 [2024-04-24 20:41:19.212803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.233 [2024-04-24 20:41:19.213115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.233 [2024-04-24 20:41:19.213116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.233 20:41:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:55.233 20:41:19 -- common/autotest_common.sh@850 -- # return 0 00:09:55.233 20:41:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:55.233 20:41:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 20:41:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.233 20:41:19 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:55.233 20:41:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 [2024-04-24 20:41:19.347503] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:55.233 20:41:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.233 20:41:19 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:55.233 20:41:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 Malloc0 00:09:55.233 20:41:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.233 20:41:19 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:55.233 20:41:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 Delay0 00:09:55.233 20:41:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.233 20:41:19 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:55.233 20:41:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 20:41:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.233 20:41:19 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:55.233 20:41:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 20:41:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.233 20:41:19 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:55.233 20:41:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 [2024-04-24 20:41:19.423304] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:55.233 20:41:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.233 20:41:19 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:55.233 20:41:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.233 20:41:19 -- common/autotest_common.sh@10 -- # set +x 00:09:55.233 20:41:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.233 20:41:19 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:55.233 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.233 [2024-04-24 20:41:19.595890] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:57.165 Initializing NVMe Controllers 00:09:57.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:57.165 controller IO queue size 128 less than required 00:09:57.165 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:57.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:57.165 Initialization complete. Launching workers. 00:09:57.165 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 34143 00:09:57.165 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 34205, failed to submit 62 00:09:57.165 success 34147, unsuccess 58, failed 0 00:09:57.165 20:41:21 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:57.165 20:41:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:57.165 20:41:21 -- common/autotest_common.sh@10 -- # set +x 00:09:57.165 20:41:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:57.165 20:41:21 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:57.165 20:41:21 -- target/abort.sh@38 -- # nvmftestfini 00:09:57.165 20:41:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:57.165 20:41:21 -- nvmf/common.sh@117 -- # sync 00:09:57.165 20:41:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:57.165 20:41:21 -- nvmf/common.sh@120 -- # set +e 00:09:57.165 20:41:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:57.165 20:41:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:57.165 rmmod nvme_tcp 00:09:57.443 rmmod nvme_fabrics 00:09:57.443 rmmod nvme_keyring 00:09:57.443 20:41:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:57.443 20:41:21 -- nvmf/common.sh@124 -- # set -e 00:09:57.443 20:41:21 -- nvmf/common.sh@125 -- # return 0 00:09:57.443 20:41:21 -- nvmf/common.sh@478 -- # '[' -n 2651104 ']' 00:09:57.443 20:41:21 -- nvmf/common.sh@479 -- # killprocess 2651104 00:09:57.443 20:41:21 -- common/autotest_common.sh@936 -- # '[' -z 2651104 ']' 00:09:57.443 20:41:21 -- common/autotest_common.sh@940 -- # kill -0 2651104 00:09:57.443 20:41:21 -- common/autotest_common.sh@941 -- # uname 00:09:57.443 20:41:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:57.443 20:41:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2651104 00:09:57.443 20:41:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:57.443 20:41:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:57.443 20:41:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2651104' 00:09:57.443 killing process with pid 2651104 00:09:57.443 20:41:21 -- common/autotest_common.sh@955 -- # kill 2651104 00:09:57.443 20:41:21 -- 
common/autotest_common.sh@960 -- # wait 2651104 00:09:57.443 20:41:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:57.443 20:41:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:57.443 20:41:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:57.443 20:41:22 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:57.443 20:41:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:57.443 20:41:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:57.443 20:41:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:57.443 20:41:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.991 20:41:24 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:59.991 00:09:59.991 real 0m12.584s 00:09:59.991 user 0m12.360s 00:09:59.991 sys 0m6.319s 00:09:59.991 20:41:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:59.991 20:41:24 -- common/autotest_common.sh@10 -- # set +x 00:09:59.991 ************************************ 00:09:59.991 END TEST nvmf_abort 00:09:59.991 ************************************ 00:09:59.991 20:41:24 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:59.991 20:41:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:59.991 20:41:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:59.991 20:41:24 -- common/autotest_common.sh@10 -- # set +x 00:09:59.991 ************************************ 00:09:59.991 START TEST nvmf_ns_hotplug_stress 00:09:59.991 ************************************ 00:09:59.991 20:41:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:59.991 * Looking for test storage... 
00:09:59.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:59.991 20:41:24 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:59.991 20:41:24 -- nvmf/common.sh@7 -- # uname -s 00:09:59.991 20:41:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:59.991 20:41:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:59.991 20:41:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:59.991 20:41:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:59.992 20:41:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:59.992 20:41:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:59.992 20:41:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:59.992 20:41:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:59.992 20:41:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:59.992 20:41:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:59.992 20:41:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:59.992 20:41:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:09:59.992 20:41:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:59.992 20:41:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:59.992 20:41:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:59.992 20:41:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:59.992 20:41:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:59.992 20:41:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:59.992 20:41:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:59.992 20:41:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:59.992 20:41:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.992 20:41:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.992 20:41:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.992 20:41:24 -- paths/export.sh@5 -- # export PATH 00:09:59.992 20:41:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:59.992 20:41:24 -- nvmf/common.sh@47 -- # : 0 00:09:59.992 20:41:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:59.992 20:41:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:59.992 20:41:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:59.992 20:41:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:59.992 20:41:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:59.992 20:41:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:59.992 20:41:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:59.992 20:41:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:59.992 20:41:24 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.992 20:41:24 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:09:59.992 20:41:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:59.992 20:41:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:59.992 20:41:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:59.992 20:41:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:59.992 20:41:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:59.992 20:41:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.992 20:41:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:59.992 20:41:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:59.992 20:41:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:09:59.992 20:41:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:09:59.992 20:41:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:09:59.992 20:41:24 -- common/autotest_common.sh@10 -- # set +x 00:10:06.585 20:41:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:06.585 20:41:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.585 20:41:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.585 20:41:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.585 20:41:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.585 20:41:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.585 20:41:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.585 20:41:31 -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.585 20:41:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.585 20:41:31 -- nvmf/common.sh@296 
-- # e810=() 00:10:06.585 20:41:31 -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.585 20:41:31 -- nvmf/common.sh@297 -- # x722=() 00:10:06.585 20:41:31 -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.585 20:41:31 -- nvmf/common.sh@298 -- # mlx=() 00:10:06.585 20:41:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.585 20:41:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.585 20:41:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.585 20:41:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.585 20:41:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.585 20:41:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.585 20:41:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:06.585 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:06.585 20:41:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.585 20:41:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:06.585 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:06.585 20:41:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.585 20:41:31 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.585 20:41:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.585 20:41:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.585 20:41:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.585 20:41:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:06.585 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:10:06.585 20:41:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.585 20:41:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.585 20:41:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.585 20:41:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:06.585 20:41:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.585 20:41:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:06.585 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:06.585 20:41:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.585 20:41:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:06.585 20:41:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:06.585 20:41:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:06.585 20:41:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:06.585 20:41:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.585 20:41:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.585 20:41:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.585 20:41:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.585 20:41:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.585 20:41:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.585 20:41:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.585 20:41:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.585 20:41:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.585 20:41:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.585 20:41:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.585 20:41:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.585 20:41:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.585 20:41:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.585 20:41:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.585 20:41:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.585 20:41:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.846 20:41:31 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.846 20:41:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.846 20:41:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:10:06.846 00:10:06.846 --- 10.0.0.2 ping statistics --- 00:10:06.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.846 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:10:06.846 20:41:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:10:06.846 00:10:06.846 --- 10.0.0.1 ping statistics --- 00:10:06.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.847 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:10:06.847 20:41:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.847 20:41:31 -- nvmf/common.sh@411 -- # return 0 00:10:06.847 20:41:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:06.847 20:41:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.847 20:41:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:06.847 20:41:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:06.847 20:41:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.847 20:41:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:06.847 20:41:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:06.847 20:41:31 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:10:06.847 20:41:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:06.847 20:41:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:06.847 20:41:31 -- common/autotest_common.sh@10 -- # set +x 00:10:06.847 20:41:31 -- nvmf/common.sh@470 -- # nvmfpid=2655847 00:10:06.847 20:41:31 -- nvmf/common.sh@471 -- # waitforlisten 2655847 00:10:06.847 20:41:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:06.847 20:41:31 -- common/autotest_common.sh@817 -- # '[' -z 2655847 ']' 00:10:06.847 20:41:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.847 20:41:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.847 20:41:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.847 20:41:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.847 20:41:31 -- common/autotest_common.sh@10 -- # set +x 00:10:06.847 [2024-04-24 20:41:31.446162] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:10:06.847 [2024-04-24 20:41:31.446223] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.847 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.108 [2024-04-24 20:41:31.519553] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:07.108 [2024-04-24 20:41:31.592647] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.108 [2024-04-24 20:41:31.592690] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.108 [2024-04-24 20:41:31.592698] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.108 [2024-04-24 20:41:31.592704] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.108 [2024-04-24 20:41:31.592710] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
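For reference, the namespace plumbing and target launch traced above reduce to the hand-condensed sketch below. The interface names (cvl_0_0, cvl_0_1), addresses and the relative build path are the ones this run discovered; on another host they would differ, and the commands need root.

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                    # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1               # target ns -> root ns
# nvmf_tgt then runs inside the namespace on cores 1-3 (-m 0xE) with all tracepoints enabled:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &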
00:10:07.108 [2024-04-24 20:41:31.592888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:07.108 [2024-04-24 20:41:31.593165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:07.108 [2024-04-24 20:41:31.593165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.051 20:41:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:08.051 20:41:32 -- common/autotest_common.sh@850 -- # return 0 00:10:08.051 20:41:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:08.051 20:41:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:08.051 20:41:32 -- common/autotest_common.sh@10 -- # set +x 00:10:08.051 20:41:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.051 20:41:32 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:10:08.051 20:41:32 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:08.051 [2024-04-24 20:41:32.546188] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.051 20:41:32 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:08.311 20:41:32 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.573 [2024-04-24 20:41:32.967838] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.573 20:41:32 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:08.573 20:41:33 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:08.834 Malloc0 00:10:08.834 20:41:33 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:09.095 Delay0 00:10:09.095 20:41:33 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.356 20:41:33 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:09.617 NULL1 00:10:09.617 20:41:34 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:09.877 20:41:34 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2656483 00:10:09.877 20:41:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:09.877 20:41:34 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:09.877 20:41:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.877 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.265 Read completed with error (sct=0, sc=11) 00:10:11.265 
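The ns_hotplug_stress loop that starts below keeps spdk_nvme_perf issuing 512-byte random reads against cnode1 for 30 seconds at queue depth 128 while the script repeatedly removes namespace 1, re-attaches Delay0 and bumps NULL1's size by one unit; the recurring "Read completed with error (sct=0, sc=11)" messages appear to be reads hitting a namespace that was just detached or resized, which is what the test exercises. A minimal sketch of that loop, assuming scripts/rpc.py from the SPDK tree and the subsystem created above:

./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do            # loop until perf finishes its 30 s run
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # NULL1 grows 1000 -> 1028 over this run
done
wait "$PERF_PID"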
20:41:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.265 20:41:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:10:11.265 20:41:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:11.526 true 00:10:11.526 20:41:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:11.526 20:41:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.470 20:41:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.470 20:41:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:10:12.470 20:41:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:12.731 true 00:10:12.731 20:41:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:12.731 20:41:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:12.992 20:41:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.992 20:41:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:10:12.992 20:41:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:13.252 true 00:10:13.252 20:41:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:13.252 20:41:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.641 20:41:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.641 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.641 20:41:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:10:14.641 20:41:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:14.641 true 00:10:14.641 20:41:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:14.641 20:41:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.930 20:41:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.191 20:41:39 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:10:15.191 20:41:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:15.191 true 00:10:15.453 20:41:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:15.453 20:41:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.396 20:41:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.396 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.657 20:41:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:10:16.657 20:41:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:16.917 true 00:10:16.917 20:41:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:16.917 20:41:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.859 20:41:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.859 20:41:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:10:17.859 20:41:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:18.119 true 00:10:18.119 20:41:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:18.119 20:41:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.379 20:41:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.641 20:41:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:10:18.641 20:41:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:18.641 true 00:10:18.641 20:41:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:18.641 20:41:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:20.027 20:41:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.027 20:41:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:10:20.027 20:41:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:20.027 
true 00:10:20.027 20:41:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:20.027 20:41:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:20.287 20:41:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:20.548 20:41:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:10:20.548 20:41:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:20.829 true 00:10:20.829 20:41:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:20.829 20:41:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.766 20:41:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.766 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.025 20:41:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:10:22.025 20:41:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:22.286 true 00:10:22.286 20:41:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:22.286 20:41:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.227 20:41:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.227 20:41:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:10:23.227 20:41:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:23.488 true 00:10:23.488 20:41:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:23.488 20:41:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.749 20:41:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.011 20:41:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:10:24.011 20:41:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:24.011 true 00:10:24.011 20:41:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:24.011 20:41:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.953 20:41:49 -- 
target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.953 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.213 20:41:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:10:25.213 20:41:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:25.473 true 00:10:25.474 20:41:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:25.474 20:41:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.735 20:41:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.995 20:41:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:10:25.995 20:41:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:25.995 true 00:10:26.255 20:41:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:26.255 20:41:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.198 20:41:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.459 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.459 20:41:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:10:27.459 20:41:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:27.459 true 00:10:27.759 20:41:52 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:27.759 20:41:52 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.330 20:41:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.593 20:41:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:10:28.593 20:41:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:28.857 true 00:10:28.857 20:41:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:28.857 20:41:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.117 20:41:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.377 20:41:53 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1018 00:10:29.377 20:41:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:29.377 true 00:10:29.377 20:41:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:29.377 20:41:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.759 20:41:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.759 20:41:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:10:30.759 20:41:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:31.019 true 00:10:31.019 20:41:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:31.019 20:41:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.960 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.960 20:41:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.960 20:41:56 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:10:31.960 20:41:56 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:32.220 true 00:10:32.220 20:41:56 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:32.221 20:41:56 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.480 20:41:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.480 20:41:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:10:32.480 20:41:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:32.741 true 00:10:32.741 20:41:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:32.741 20:41:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.148 20:41:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.148 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:34.148 20:41:58 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:10:34.148 20:41:58 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:34.409 true 00:10:34.409 20:41:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:34.409 20:41:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.400 20:41:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.400 20:41:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:10:35.400 20:41:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:35.660 true 00:10:35.660 20:42:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:35.660 20:42:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.920 20:42:00 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.180 20:42:00 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:10:36.180 20:42:00 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:36.180 true 00:10:36.180 20:42:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:36.180 20:42:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.564 20:42:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.564 20:42:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:10:37.564 20:42:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:37.564 true 00:10:37.825 20:42:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:37.826 20:42:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.826 20:42:02 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:38.087 20:42:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:10:38.087 20:42:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:38.348 true 00:10:38.348 20:42:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:38.348 20:42:02 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.291 20:42:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:39.553 20:42:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:10:39.553 20:42:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:39.813 true 00:10:39.813 20:42:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:39.813 20:42:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.075 20:42:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.075 Initializing NVMe Controllers 00:10:40.075 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:40.075 Controller IO queue size 128, less than required. 00:10:40.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:40.075 Controller IO queue size 128, less than required. 00:10:40.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:40.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:40.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:10:40.075 Initialization complete. Launching workers. 
00:10:40.075 ======================================================== 00:10:40.075 Latency(us) 00:10:40.075 Device Information : IOPS MiB/s Average min max 00:10:40.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1575.61 0.77 52825.96 2923.99 1073714.80 00:10:40.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18802.16 9.18 6807.36 1426.07 402221.53 00:10:40.075 ======================================================== 00:10:40.075 Total : 20377.76 9.95 10365.52 1426.07 1073714.80 00:10:40.075 00:10:40.337 20:42:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:10:40.337 20:42:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:40.337 true 00:10:40.598 20:42:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2656483 00:10:40.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2656483) - No such process 00:10:40.598 20:42:04 -- target/ns_hotplug_stress.sh@44 -- # wait 2656483 00:10:40.598 20:42:04 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:40.598 20:42:04 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:10:40.598 20:42:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:40.598 20:42:04 -- nvmf/common.sh@117 -- # sync 00:10:40.598 20:42:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:40.598 20:42:04 -- nvmf/common.sh@120 -- # set +e 00:10:40.598 20:42:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.598 20:42:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:40.598 rmmod nvme_tcp 00:10:40.598 rmmod nvme_fabrics 00:10:40.598 rmmod nvme_keyring 00:10:40.598 20:42:05 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.598 20:42:05 -- nvmf/common.sh@124 -- # set -e 00:10:40.598 20:42:05 -- nvmf/common.sh@125 -- # return 0 00:10:40.598 20:42:05 -- nvmf/common.sh@478 -- # '[' -n 2655847 ']' 00:10:40.598 20:42:05 -- nvmf/common.sh@479 -- # killprocess 2655847 00:10:40.598 20:42:05 -- common/autotest_common.sh@936 -- # '[' -z 2655847 ']' 00:10:40.598 20:42:05 -- common/autotest_common.sh@940 -- # kill -0 2655847 00:10:40.598 20:42:05 -- common/autotest_common.sh@941 -- # uname 00:10:40.598 20:42:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:40.598 20:42:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2655847 00:10:40.598 20:42:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:40.598 20:42:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:40.598 20:42:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2655847' 00:10:40.598 killing process with pid 2655847 00:10:40.598 20:42:05 -- common/autotest_common.sh@955 -- # kill 2655847 00:10:40.598 20:42:05 -- common/autotest_common.sh@960 -- # wait 2655847 00:10:40.897 20:42:05 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:40.897 20:42:05 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:40.897 20:42:05 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:40.897 20:42:05 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:40.897 20:42:05 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:40.897 20:42:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.897 20:42:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:40.897 20:42:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:42.813 
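The shutdown around this point (nvmftestfini) amounts to roughly the following; the namespace removal itself happens inside _remove_spdk_ns, whose xtrace is hidden here, so the last line is an assumption rather than something echoed in this log.

sync
modprobe -v -r nvme-tcp              # -v echoes the rmmod of nvme_tcp, nvme_fabrics, nvme_keyring seen above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # nvmfpid=2655847, the nvmf_tgt running inside the namespace
ip -4 addr flush cvl_0_1
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns (its output is redirected away)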
20:42:07 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:42.813 00:10:42.813 real 0m43.008s 00:10:42.813 user 2m30.238s 00:10:42.813 sys 0m10.845s 00:10:42.813 20:42:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:42.813 20:42:07 -- common/autotest_common.sh@10 -- # set +x 00:10:42.813 ************************************ 00:10:42.813 END TEST nvmf_ns_hotplug_stress 00:10:42.813 ************************************ 00:10:42.813 20:42:07 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:42.813 20:42:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:42.813 20:42:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:42.813 20:42:07 -- common/autotest_common.sh@10 -- # set +x 00:10:43.075 ************************************ 00:10:43.075 START TEST nvmf_connect_stress 00:10:43.075 ************************************ 00:10:43.075 20:42:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:10:43.075 * Looking for test storage... 00:10:43.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:43.075 20:42:07 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:43.075 20:42:07 -- nvmf/common.sh@7 -- # uname -s 00:10:43.075 20:42:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.075 20:42:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.075 20:42:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.075 20:42:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.075 20:42:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.075 20:42:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.075 20:42:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.075 20:42:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.075 20:42:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.075 20:42:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.075 20:42:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:10:43.075 20:42:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:10:43.075 20:42:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.075 20:42:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.075 20:42:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:43.075 20:42:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.075 20:42:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:43.075 20:42:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.075 20:42:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.075 20:42:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.075 20:42:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.075 20:42:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.075 20:42:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.075 20:42:07 -- paths/export.sh@5 -- # export PATH 00:10:43.075 20:42:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.075 20:42:07 -- nvmf/common.sh@47 -- # : 0 00:10:43.075 20:42:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:43.075 20:42:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:43.075 20:42:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.075 20:42:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.075 20:42:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.075 20:42:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:43.075 20:42:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:43.075 20:42:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:43.075 20:42:07 -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:43.075 20:42:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:43.075 20:42:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.075 20:42:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:43.075 20:42:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:43.075 20:42:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:43.075 20:42:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.075 20:42:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.075 20:42:07 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.075 20:42:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:10:43.075 20:42:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:43.075 20:42:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:43.075 20:42:07 -- common/autotest_common.sh@10 -- # set +x 00:10:51.224 20:42:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:51.224 20:42:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.224 20:42:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.224 20:42:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.224 20:42:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.224 20:42:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.224 20:42:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.224 20:42:14 -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.224 20:42:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.224 20:42:14 -- nvmf/common.sh@296 -- # e810=() 00:10:51.224 20:42:14 -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.224 20:42:14 -- nvmf/common.sh@297 -- # x722=() 00:10:51.224 20:42:14 -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.224 20:42:14 -- nvmf/common.sh@298 -- # mlx=() 00:10:51.224 20:42:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.224 20:42:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.224 20:42:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.224 20:42:14 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.224 20:42:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.224 20:42:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.224 20:42:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:51.224 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:51.224 20:42:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.224 20:42:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:51.224 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:51.224 
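nvmf/common.sh fills the e810/x722/mlx arrays from a pci_bus_cache keyed by vendor:device ID; because this run is configured for e810 NICs, only the two E810 ports (0x8086:0x159b at 0000:4b:00.0 and 0000:4b:00.1) end up in pci_devs, and each is then mapped to its kernel interface through sysfs, which is what the trace just below does. A condensed sketch of that lookup, using the addresses from this run:

for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep just the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done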
20:42:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.224 20:42:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.224 20:42:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.224 20:42:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:51.224 20:42:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.224 20:42:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:51.224 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:51.224 20:42:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.224 20:42:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.224 20:42:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.224 20:42:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:51.224 20:42:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.224 20:42:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:51.224 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:51.224 20:42:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.224 20:42:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:51.224 20:42:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:51.224 20:42:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:51.224 20:42:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:51.224 20:42:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.224 20:42:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.224 20:42:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.224 20:42:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.224 20:42:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.224 20:42:14 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.224 20:42:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.224 20:42:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.224 20:42:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.224 20:42:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.224 20:42:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.224 20:42:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.224 20:42:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.224 20:42:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.224 20:42:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.224 20:42:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.224 20:42:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.224 20:42:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.224 20:42:14 -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.224 20:42:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:10:51.224 00:10:51.224 --- 10.0.0.2 ping statistics --- 00:10:51.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.225 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:10:51.225 20:42:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:10:51.225 00:10:51.225 --- 10.0.0.1 ping statistics --- 00:10:51.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.225 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:10:51.225 20:42:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.225 20:42:14 -- nvmf/common.sh@411 -- # return 0 00:10:51.225 20:42:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:51.225 20:42:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.225 20:42:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:51.225 20:42:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:51.225 20:42:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.225 20:42:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:51.225 20:42:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:51.225 20:42:14 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:51.225 20:42:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:51.225 20:42:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:51.225 20:42:14 -- common/autotest_common.sh@10 -- # set +x 00:10:51.225 20:42:14 -- nvmf/common.sh@470 -- # nvmfpid=2667484 00:10:51.225 20:42:14 -- nvmf/common.sh@471 -- # waitforlisten 2667484 00:10:51.225 20:42:14 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:51.225 20:42:14 -- common/autotest_common.sh@817 -- # '[' -z 2667484 ']' 00:10:51.225 20:42:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.225 20:42:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:51.225 20:42:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.225 20:42:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:51.225 20:42:14 -- common/autotest_common.sh@10 -- # set +x 00:10:51.225 [2024-04-24 20:42:14.956892] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:10:51.225 [2024-04-24 20:42:14.956954] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.225 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.225 [2024-04-24 20:42:15.029616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.225 [2024-04-24 20:42:15.102173] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:51.225 [2024-04-24 20:42:15.102212] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.225 [2024-04-24 20:42:15.102220] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.225 [2024-04-24 20:42:15.102230] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.225 [2024-04-24 20:42:15.102236] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:51.225 [2024-04-24 20:42:15.102346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.225 [2024-04-24 20:42:15.102502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.225 [2024-04-24 20:42:15.102503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.225 20:42:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:51.225 20:42:15 -- common/autotest_common.sh@850 -- # return 0 00:10:51.225 20:42:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:51.225 20:42:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:51.225 20:42:15 -- common/autotest_common.sh@10 -- # set +x 00:10:51.487 20:42:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.487 20:42:15 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.487 20:42:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.487 20:42:15 -- common/autotest_common.sh@10 -- # set +x 00:10:51.487 [2024-04-24 20:42:15.874616] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.487 20:42:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.487 20:42:15 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:51.487 20:42:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.487 20:42:15 -- common/autotest_common.sh@10 -- # set +x 00:10:51.487 20:42:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.487 20:42:15 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.487 20:42:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.487 20:42:15 -- common/autotest_common.sh@10 -- # set +x 00:10:51.487 [2024-04-24 20:42:15.909889] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.487 20:42:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.487 20:42:15 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:51.487 20:42:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.487 20:42:15 -- common/autotest_common.sh@10 -- # set +x 00:10:51.487 NULL1 00:10:51.487 20:42:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.487 20:42:15 -- target/connect_stress.sh@21 -- # PERF_PID=2667537 00:10:51.487 20:42:15 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:51.487 20:42:15 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:51.487 20:42:15 -- target/connect_stress.sh@25 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # seq 1 20 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:15 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:15 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:16 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:16 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:16 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:16 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:16 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:51.487 20:42:16 -- target/connect_stress.sh@28 -- # cat 00:10:51.487 20:42:16 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:51.487 20:42:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.487 20:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.487 20:42:16 -- common/autotest_common.sh@10 -- # set +x 00:10:51.749 20:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:51.749 20:42:16 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:51.749 20:42:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.749 20:42:16 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:51.749 20:42:16 -- common/autotest_common.sh@10 -- # set +x 00:10:52.320 20:42:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.320 20:42:16 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:52.320 20:42:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.320 20:42:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.320 20:42:16 -- common/autotest_common.sh@10 -- # set +x 00:10:52.581 20:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.581 20:42:17 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:52.581 20:42:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.581 20:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.581 20:42:17 -- common/autotest_common.sh@10 -- # set +x 00:10:52.842 20:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:52.842 20:42:17 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:52.842 20:42:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.842 20:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:52.842 20:42:17 -- common/autotest_common.sh@10 -- # set +x 00:10:53.103 20:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.103 20:42:17 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:53.103 20:42:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.103 20:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.103 20:42:17 -- common/autotest_common.sh@10 -- # set +x 00:10:53.364 20:42:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.364 20:42:17 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:53.364 20:42:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.364 20:42:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.364 20:42:17 -- common/autotest_common.sh@10 -- # set +x 00:10:53.936 20:42:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:53.936 20:42:18 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:53.936 20:42:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.936 20:42:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:53.936 20:42:18 -- common/autotest_common.sh@10 -- # set +x 00:10:54.197 20:42:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.197 20:42:18 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:54.197 20:42:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.197 20:42:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.197 20:42:18 -- common/autotest_common.sh@10 -- # set +x 00:10:54.458 20:42:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.458 20:42:18 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:54.458 20:42:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.458 20:42:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.458 20:42:18 -- common/autotest_common.sh@10 -- # set +x 00:10:54.719 20:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.719 20:42:19 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:54.719 20:42:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.719 20:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.719 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:10:54.981 20:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:54.981 20:42:19 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:54.981 20:42:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.981 20:42:19 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:54.981 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:10:55.554 20:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.554 20:42:19 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:55.554 20:42:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.554 20:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.554 20:42:19 -- common/autotest_common.sh@10 -- # set +x 00:10:55.815 20:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:55.815 20:42:20 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:55.815 20:42:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.815 20:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:55.815 20:42:20 -- common/autotest_common.sh@10 -- # set +x 00:10:56.076 20:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.076 20:42:20 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:56.076 20:42:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.076 20:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.076 20:42:20 -- common/autotest_common.sh@10 -- # set +x 00:10:56.336 20:42:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.336 20:42:20 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:56.336 20:42:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.336 20:42:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.336 20:42:20 -- common/autotest_common.sh@10 -- # set +x 00:10:56.597 20:42:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:56.597 20:42:21 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:56.597 20:42:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.597 20:42:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:56.597 20:42:21 -- common/autotest_common.sh@10 -- # set +x 00:10:57.169 20:42:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.169 20:42:21 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:57.169 20:42:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.169 20:42:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.169 20:42:21 -- common/autotest_common.sh@10 -- # set +x 00:10:57.430 20:42:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.430 20:42:21 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:57.430 20:42:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.430 20:42:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.430 20:42:21 -- common/autotest_common.sh@10 -- # set +x 00:10:57.691 20:42:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.691 20:42:22 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:57.691 20:42:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.691 20:42:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.691 20:42:22 -- common/autotest_common.sh@10 -- # set +x 00:10:57.952 20:42:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:57.952 20:42:22 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:57.952 20:42:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.952 20:42:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:57.952 20:42:22 -- common/autotest_common.sh@10 -- # set +x 00:10:58.233 20:42:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.233 20:42:22 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:58.233 20:42:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.233 20:42:22 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.233 20:42:22 -- common/autotest_common.sh@10 -- # set +x 00:10:58.806 20:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:58.806 20:42:23 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:58.806 20:42:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:58.806 20:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:58.806 20:42:23 -- common/autotest_common.sh@10 -- # set +x 00:10:59.066 20:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.066 20:42:23 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:59.066 20:42:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.066 20:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.066 20:42:23 -- common/autotest_common.sh@10 -- # set +x 00:10:59.327 20:42:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.327 20:42:23 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:59.327 20:42:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.327 20:42:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.327 20:42:23 -- common/autotest_common.sh@10 -- # set +x 00:10:59.587 20:42:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:59.587 20:42:24 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:10:59.587 20:42:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:59.587 20:42:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:59.587 20:42:24 -- common/autotest_common.sh@10 -- # set +x 00:11:00.159 20:42:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.159 20:42:24 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:11:00.159 20:42:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.159 20:42:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.159 20:42:24 -- common/autotest_common.sh@10 -- # set +x 00:11:00.420 20:42:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.420 20:42:24 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:11:00.420 20:42:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.420 20:42:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.420 20:42:24 -- common/autotest_common.sh@10 -- # set +x 00:11:00.681 20:42:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.681 20:42:25 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:11:00.681 20:42:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.681 20:42:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.681 20:42:25 -- common/autotest_common.sh@10 -- # set +x 00:11:00.941 20:42:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:00.941 20:42:25 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:11:00.941 20:42:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:00.941 20:42:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:00.941 20:42:25 -- common/autotest_common.sh@10 -- # set +x 00:11:01.202 20:42:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.202 20:42:25 -- target/connect_stress.sh@34 -- # kill -0 2667537 00:11:01.202 20:42:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:01.202 20:42:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:01.202 20:42:25 -- common/autotest_common.sh@10 -- # set +x 00:11:01.462 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:01.722 20:42:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:01.722 20:42:26 -- target/connect_stress.sh@34 -- # kill -0 2667537 
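While the loop above polls the connect_stress child with kill -0, the tool itself is stress-testing connections to the subsystem configured earlier (nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420). A hedged one-off equivalent with standard nvme-cli, useful for checking the same listener by hand, would be:

# Sketch only: a single connect/disconnect against the listener the stress tool exercises.
# Assumes nvme-cli is installed; the kernel initiator needs the nvme-tcp module.
modprobe nvme-tcp
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list                                        # the NULL1-backed namespace should appear
nvme disconnect -n nqn.2016-06.io.spdk:cnode1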
00:11:01.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2667537) - No such process 00:11:01.722 20:42:26 -- target/connect_stress.sh@38 -- # wait 2667537 00:11:01.722 20:42:26 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:01.722 20:42:26 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:01.722 20:42:26 -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:01.722 20:42:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:01.722 20:42:26 -- nvmf/common.sh@117 -- # sync 00:11:01.722 20:42:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:01.722 20:42:26 -- nvmf/common.sh@120 -- # set +e 00:11:01.722 20:42:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.722 20:42:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:01.722 rmmod nvme_tcp 00:11:01.722 rmmod nvme_fabrics 00:11:01.722 rmmod nvme_keyring 00:11:01.722 20:42:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.722 20:42:26 -- nvmf/common.sh@124 -- # set -e 00:11:01.722 20:42:26 -- nvmf/common.sh@125 -- # return 0 00:11:01.722 20:42:26 -- nvmf/common.sh@478 -- # '[' -n 2667484 ']' 00:11:01.722 20:42:26 -- nvmf/common.sh@479 -- # killprocess 2667484 00:11:01.722 20:42:26 -- common/autotest_common.sh@936 -- # '[' -z 2667484 ']' 00:11:01.722 20:42:26 -- common/autotest_common.sh@940 -- # kill -0 2667484 00:11:01.722 20:42:26 -- common/autotest_common.sh@941 -- # uname 00:11:01.722 20:42:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:01.722 20:42:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2667484 00:11:01.722 20:42:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:01.722 20:42:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:01.722 20:42:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2667484' 00:11:01.722 killing process with pid 2667484 00:11:01.722 20:42:26 -- common/autotest_common.sh@955 -- # kill 2667484 00:11:01.722 20:42:26 -- common/autotest_common.sh@960 -- # wait 2667484 00:11:01.983 20:42:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:01.983 20:42:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:01.983 20:42:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:01.983 20:42:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:01.983 20:42:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:01.983 20:42:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:01.983 20:42:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:01.983 20:42:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.897 20:42:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:03.897 00:11:03.897 real 0m20.927s 00:11:03.897 user 0m42.546s 00:11:03.897 sys 0m8.637s 00:11:03.897 20:42:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:03.897 20:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:03.897 ************************************ 00:11:03.897 END TEST nvmf_connect_stress 00:11:03.897 ************************************ 00:11:03.897 20:42:28 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:03.897 20:42:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:03.897 20:42:28 -- common/autotest_common.sh@1093 -- # 
xtrace_disable 00:11:03.897 20:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:04.158 ************************************ 00:11:04.158 START TEST nvmf_fused_ordering 00:11:04.158 ************************************ 00:11:04.158 20:42:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:04.158 * Looking for test storage... 00:11:04.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.158 20:42:28 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.158 20:42:28 -- nvmf/common.sh@7 -- # uname -s 00:11:04.158 20:42:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.158 20:42:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.158 20:42:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.158 20:42:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.158 20:42:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.158 20:42:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.158 20:42:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.158 20:42:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.158 20:42:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.158 20:42:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.158 20:42:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:04.158 20:42:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:04.158 20:42:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.158 20:42:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.158 20:42:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.158 20:42:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.158 20:42:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.158 20:42:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.158 20:42:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.158 20:42:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.158 20:42:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.158 20:42:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.158 20:42:28 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.158 20:42:28 -- paths/export.sh@5 -- # export PATH 00:11:04.158 20:42:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.158 20:42:28 -- nvmf/common.sh@47 -- # : 0 00:11:04.158 20:42:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:04.158 20:42:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:04.158 20:42:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.158 20:42:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.158 20:42:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.158 20:42:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:04.158 20:42:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:04.158 20:42:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:04.158 20:42:28 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:04.158 20:42:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:04.158 20:42:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.158 20:42:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:04.158 20:42:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:04.158 20:42:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:04.158 20:42:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.158 20:42:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.158 20:42:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.158 20:42:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:04.158 20:42:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:04.158 20:42:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:04.159 20:42:28 -- common/autotest_common.sh@10 -- # set +x 00:11:12.297 20:42:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:12.297 20:42:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.297 20:42:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.297 20:42:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.297 20:42:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.297 20:42:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.297 20:42:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.297 20:42:35 -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.297 20:42:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.297 20:42:35 -- nvmf/common.sh@296 -- # e810=() 00:11:12.297 20:42:35 -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.297 20:42:35 -- nvmf/common.sh@297 -- # 
x722=() 00:11:12.297 20:42:35 -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.297 20:42:35 -- nvmf/common.sh@298 -- # mlx=() 00:11:12.297 20:42:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.297 20:42:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.297 20:42:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.297 20:42:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.297 20:42:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.297 20:42:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.297 20:42:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:12.297 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:12.297 20:42:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.297 20:42:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:12.297 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:12.297 20:42:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.297 20:42:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.297 20:42:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.297 20:42:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:12.297 20:42:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.297 20:42:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:12.297 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:12.297 20:42:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:12.297 20:42:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.297 20:42:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.297 20:42:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:12.297 20:42:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.297 20:42:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:12.297 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:12.297 20:42:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.297 20:42:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:12.297 20:42:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:12.297 20:42:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:12.297 20:42:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:12.297 20:42:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.297 20:42:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.297 20:42:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.297 20:42:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.297 20:42:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.297 20:42:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.297 20:42:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.297 20:42:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.297 20:42:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.297 20:42:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.297 20:42:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.297 20:42:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.297 20:42:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.297 20:42:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.297 20:42:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.297 20:42:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.297 20:42:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.297 20:42:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.297 20:42:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.297 20:42:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:11:12.297 00:11:12.297 --- 10.0.0.2 ping statistics --- 00:11:12.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.297 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:11:12.297 20:42:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:11:12.297 00:11:12.297 --- 10.0.0.1 ping statistics --- 00:11:12.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.297 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:11:12.297 20:42:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.297 20:42:36 -- nvmf/common.sh@411 -- # return 0 00:11:12.297 20:42:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:12.297 20:42:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.297 20:42:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:12.297 20:42:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:12.297 20:42:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.297 20:42:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:12.297 20:42:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:12.297 20:42:36 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:12.297 20:42:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:12.297 20:42:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:12.297 20:42:36 -- common/autotest_common.sh@10 -- # set +x 00:11:12.297 20:42:36 -- nvmf/common.sh@470 -- # nvmfpid=2673876 00:11:12.297 20:42:36 -- nvmf/common.sh@471 -- # waitforlisten 2673876 00:11:12.297 20:42:36 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:12.297 20:42:36 -- common/autotest_common.sh@817 -- # '[' -z 2673876 ']' 00:11:12.297 20:42:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.297 20:42:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:12.297 20:42:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.298 20:42:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:12.298 20:42:36 -- common/autotest_common.sh@10 -- # set +x 00:11:12.298 [2024-04-24 20:42:36.218941] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:11:12.298 [2024-04-24 20:42:36.219005] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.298 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.298 [2024-04-24 20:42:36.289300] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.298 [2024-04-24 20:42:36.360542] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.298 [2024-04-24 20:42:36.360581] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.298 [2024-04-24 20:42:36.360589] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.298 [2024-04-24 20:42:36.360595] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.298 [2024-04-24 20:42:36.360601] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
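The nvmf_tcp_init sequence traced above moves one E810 port (cvl_0_0) into a private namespace as the target side and leaves the other port (cvl_0_1) on the host as the initiator. A condensed sketch of those steps, using the interface names and addresses from this trace, is:

# Target interface lives in the namespace, initiator stays on the host.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP (port 4420) through the host firewall
ping -c 1 10.0.0.2                                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host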
00:11:12.298 [2024-04-24 20:42:36.360626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.559 20:42:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:12.559 20:42:37 -- common/autotest_common.sh@850 -- # return 0 00:11:12.559 20:42:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:12.559 20:42:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:12.559 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 20:42:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.559 20:42:37 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:12.559 20:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.559 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 [2024-04-24 20:42:37.123393] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.559 20:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.559 20:42:37 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:12.559 20:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.559 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 20:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.559 20:42:37 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.559 20:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.559 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 [2024-04-24 20:42:37.147563] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.559 20:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.559 20:42:37 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:12.559 20:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.559 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 NULL1 00:11:12.559 20:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.559 20:42:37 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:12.559 20:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.559 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 20:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.559 20:42:37 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:12.559 20:42:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:12.559 20:42:37 -- common/autotest_common.sh@10 -- # set +x 00:11:12.559 20:42:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:12.559 20:42:37 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:12.820 [2024-04-24 20:42:37.211155] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
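The rpc_cmd calls above configure the fused-ordering target: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev attached as namespace 1 (reported below as "Namespace ID: 1 size: 1GB"). rpc_cmd here wraps scripts/rpc.py, so a manual sketch of the same sequence (assuming the default /var/tmp/spdk.sock RPC socket) would be:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                         # flags exactly as used by the test above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                                  # allow any host, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                                 # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1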
00:11:12.820 [2024-04-24 20:42:37.211192] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2674119 ] 00:11:12.820 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.392 Attached to nqn.2016-06.io.spdk:cnode1 00:11:13.392 Namespace ID: 1 size: 1GB 00:11:13.392 fused_ordering(0) 00:11:13.392 fused_ordering(1) 00:11:13.392 fused_ordering(2) 00:11:13.392 fused_ordering(3) 00:11:13.392 fused_ordering(4) 00:11:13.392 fused_ordering(5) 00:11:13.392 fused_ordering(6) 00:11:13.392 fused_ordering(7) 00:11:13.392 fused_ordering(8) 00:11:13.392 fused_ordering(9) 00:11:13.392 fused_ordering(10) 00:11:13.392 fused_ordering(11) 00:11:13.392 fused_ordering(12) 00:11:13.392 fused_ordering(13) 00:11:13.392 fused_ordering(14) 00:11:13.392 fused_ordering(15) 00:11:13.392 fused_ordering(16) 00:11:13.392 fused_ordering(17) 00:11:13.392 fused_ordering(18) 00:11:13.392 fused_ordering(19) 00:11:13.392 fused_ordering(20) 00:11:13.392 fused_ordering(21) 00:11:13.392 fused_ordering(22) 00:11:13.392 fused_ordering(23) 00:11:13.392 fused_ordering(24) 00:11:13.392 fused_ordering(25) 00:11:13.392 fused_ordering(26) 00:11:13.392 fused_ordering(27) 00:11:13.392 fused_ordering(28) 00:11:13.392 fused_ordering(29) 00:11:13.392 fused_ordering(30) 00:11:13.392 fused_ordering(31) 00:11:13.392 fused_ordering(32) 00:11:13.392 fused_ordering(33) 00:11:13.392 fused_ordering(34) 00:11:13.392 fused_ordering(35) 00:11:13.392 fused_ordering(36) 00:11:13.392 fused_ordering(37) 00:11:13.392 fused_ordering(38) 00:11:13.392 fused_ordering(39) 00:11:13.392 fused_ordering(40) 00:11:13.392 fused_ordering(41) 00:11:13.392 fused_ordering(42) 00:11:13.392 fused_ordering(43) 00:11:13.392 fused_ordering(44) 00:11:13.392 fused_ordering(45) 00:11:13.392 fused_ordering(46) 00:11:13.392 fused_ordering(47) 00:11:13.392 fused_ordering(48) 00:11:13.392 fused_ordering(49) 00:11:13.392 fused_ordering(50) 00:11:13.392 fused_ordering(51) 00:11:13.392 fused_ordering(52) 00:11:13.392 fused_ordering(53) 00:11:13.392 fused_ordering(54) 00:11:13.392 fused_ordering(55) 00:11:13.392 fused_ordering(56) 00:11:13.392 fused_ordering(57) 00:11:13.392 fused_ordering(58) 00:11:13.392 fused_ordering(59) 00:11:13.392 fused_ordering(60) 00:11:13.392 fused_ordering(61) 00:11:13.392 fused_ordering(62) 00:11:13.392 fused_ordering(63) 00:11:13.392 fused_ordering(64) 00:11:13.392 fused_ordering(65) 00:11:13.392 fused_ordering(66) 00:11:13.392 fused_ordering(67) 00:11:13.392 fused_ordering(68) 00:11:13.392 fused_ordering(69) 00:11:13.392 fused_ordering(70) 00:11:13.392 fused_ordering(71) 00:11:13.392 fused_ordering(72) 00:11:13.392 fused_ordering(73) 00:11:13.392 fused_ordering(74) 00:11:13.392 fused_ordering(75) 00:11:13.392 fused_ordering(76) 00:11:13.392 fused_ordering(77) 00:11:13.392 fused_ordering(78) 00:11:13.392 fused_ordering(79) 00:11:13.392 fused_ordering(80) 00:11:13.392 fused_ordering(81) 00:11:13.392 fused_ordering(82) 00:11:13.392 fused_ordering(83) 00:11:13.392 fused_ordering(84) 00:11:13.392 fused_ordering(85) 00:11:13.392 fused_ordering(86) 00:11:13.392 fused_ordering(87) 00:11:13.392 fused_ordering(88) 00:11:13.392 fused_ordering(89) 00:11:13.392 fused_ordering(90) 00:11:13.392 fused_ordering(91) 00:11:13.392 fused_ordering(92) 00:11:13.392 fused_ordering(93) 00:11:13.392 fused_ordering(94) 00:11:13.392 fused_ordering(95) 00:11:13.392 fused_ordering(96) 00:11:13.392 
fused_ordering(97) 00:11:13.392 fused_ordering(98) 00:11:13.392 fused_ordering(99) 00:11:13.392 fused_ordering(100) 00:11:13.392 fused_ordering(101) 00:11:13.392 fused_ordering(102) 00:11:13.392 fused_ordering(103) 00:11:13.392 fused_ordering(104) 00:11:13.392 fused_ordering(105) 00:11:13.392 fused_ordering(106) 00:11:13.392 fused_ordering(107) 00:11:13.392 fused_ordering(108) 00:11:13.392 fused_ordering(109) 00:11:13.392 fused_ordering(110) 00:11:13.392 fused_ordering(111) 00:11:13.392 fused_ordering(112) 00:11:13.392 fused_ordering(113) 00:11:13.392 fused_ordering(114) 00:11:13.392 fused_ordering(115) 00:11:13.392 fused_ordering(116) 00:11:13.392 fused_ordering(117) 00:11:13.392 fused_ordering(118) 00:11:13.392 fused_ordering(119) 00:11:13.392 fused_ordering(120) 00:11:13.392 fused_ordering(121) 00:11:13.392 fused_ordering(122) 00:11:13.392 fused_ordering(123) 00:11:13.392 fused_ordering(124) 00:11:13.392 fused_ordering(125) 00:11:13.392 fused_ordering(126) 00:11:13.392 fused_ordering(127) 00:11:13.392 fused_ordering(128) 00:11:13.392 fused_ordering(129) 00:11:13.392 fused_ordering(130) 00:11:13.392 fused_ordering(131) 00:11:13.392 fused_ordering(132) 00:11:13.392 fused_ordering(133) 00:11:13.392 fused_ordering(134) 00:11:13.392 fused_ordering(135) 00:11:13.392 fused_ordering(136) 00:11:13.392 fused_ordering(137) 00:11:13.392 fused_ordering(138) 00:11:13.392 fused_ordering(139) 00:11:13.392 fused_ordering(140) 00:11:13.392 fused_ordering(141) 00:11:13.392 fused_ordering(142) 00:11:13.392 fused_ordering(143) 00:11:13.392 fused_ordering(144) 00:11:13.392 fused_ordering(145) 00:11:13.392 fused_ordering(146) 00:11:13.392 fused_ordering(147) 00:11:13.392 fused_ordering(148) 00:11:13.392 fused_ordering(149) 00:11:13.392 fused_ordering(150) 00:11:13.392 fused_ordering(151) 00:11:13.392 fused_ordering(152) 00:11:13.392 fused_ordering(153) 00:11:13.392 fused_ordering(154) 00:11:13.392 fused_ordering(155) 00:11:13.392 fused_ordering(156) 00:11:13.392 fused_ordering(157) 00:11:13.392 fused_ordering(158) 00:11:13.392 fused_ordering(159) 00:11:13.392 fused_ordering(160) 00:11:13.392 fused_ordering(161) 00:11:13.392 fused_ordering(162) 00:11:13.392 fused_ordering(163) 00:11:13.392 fused_ordering(164) 00:11:13.392 fused_ordering(165) 00:11:13.392 fused_ordering(166) 00:11:13.393 fused_ordering(167) 00:11:13.393 fused_ordering(168) 00:11:13.393 fused_ordering(169) 00:11:13.393 fused_ordering(170) 00:11:13.393 fused_ordering(171) 00:11:13.393 fused_ordering(172) 00:11:13.393 fused_ordering(173) 00:11:13.393 fused_ordering(174) 00:11:13.393 fused_ordering(175) 00:11:13.393 fused_ordering(176) 00:11:13.393 fused_ordering(177) 00:11:13.393 fused_ordering(178) 00:11:13.393 fused_ordering(179) 00:11:13.393 fused_ordering(180) 00:11:13.393 fused_ordering(181) 00:11:13.393 fused_ordering(182) 00:11:13.393 fused_ordering(183) 00:11:13.393 fused_ordering(184) 00:11:13.393 fused_ordering(185) 00:11:13.393 fused_ordering(186) 00:11:13.393 fused_ordering(187) 00:11:13.393 fused_ordering(188) 00:11:13.393 fused_ordering(189) 00:11:13.393 fused_ordering(190) 00:11:13.393 fused_ordering(191) 00:11:13.393 fused_ordering(192) 00:11:13.393 fused_ordering(193) 00:11:13.393 fused_ordering(194) 00:11:13.393 fused_ordering(195) 00:11:13.393 fused_ordering(196) 00:11:13.393 fused_ordering(197) 00:11:13.393 fused_ordering(198) 00:11:13.393 fused_ordering(199) 00:11:13.393 fused_ordering(200) 00:11:13.393 fused_ordering(201) 00:11:13.393 fused_ordering(202) 00:11:13.393 fused_ordering(203) 00:11:13.393 fused_ordering(204) 
00:11:13.393 fused_ordering(205) 00:11:13.653 fused_ordering(206) 00:11:13.653 fused_ordering(207) 00:11:13.653 fused_ordering(208) 00:11:13.653 fused_ordering(209) 00:11:13.653 fused_ordering(210) 00:11:13.653 fused_ordering(211) 00:11:13.653 fused_ordering(212) 00:11:13.653 fused_ordering(213) 00:11:13.653 fused_ordering(214) 00:11:13.653 fused_ordering(215) 00:11:13.653 fused_ordering(216) 00:11:13.653 fused_ordering(217) 00:11:13.653 fused_ordering(218) 00:11:13.653 fused_ordering(219) 00:11:13.653 fused_ordering(220) 00:11:13.653 fused_ordering(221) 00:11:13.653 fused_ordering(222) 00:11:13.653 fused_ordering(223) 00:11:13.653 fused_ordering(224) 00:11:13.653 fused_ordering(225) 00:11:13.653 fused_ordering(226) 00:11:13.653 fused_ordering(227) 00:11:13.653 fused_ordering(228) 00:11:13.653 fused_ordering(229) 00:11:13.653 fused_ordering(230) 00:11:13.653 fused_ordering(231) 00:11:13.653 fused_ordering(232) 00:11:13.653 fused_ordering(233) 00:11:13.654 fused_ordering(234) 00:11:13.654 fused_ordering(235) 00:11:13.654 fused_ordering(236) 00:11:13.654 fused_ordering(237) 00:11:13.654 fused_ordering(238) 00:11:13.654 fused_ordering(239) 00:11:13.654 fused_ordering(240) 00:11:13.654 fused_ordering(241) 00:11:13.654 fused_ordering(242) 00:11:13.654 fused_ordering(243) 00:11:13.654 fused_ordering(244) 00:11:13.654 fused_ordering(245) 00:11:13.654 fused_ordering(246) 00:11:13.654 fused_ordering(247) 00:11:13.654 fused_ordering(248) 00:11:13.654 fused_ordering(249) 00:11:13.654 fused_ordering(250) 00:11:13.654 fused_ordering(251) 00:11:13.654 fused_ordering(252) 00:11:13.654 fused_ordering(253) 00:11:13.654 fused_ordering(254) 00:11:13.654 fused_ordering(255) 00:11:13.654 fused_ordering(256) 00:11:13.654 fused_ordering(257) 00:11:13.654 fused_ordering(258) 00:11:13.654 fused_ordering(259) 00:11:13.654 fused_ordering(260) 00:11:13.654 fused_ordering(261) 00:11:13.654 fused_ordering(262) 00:11:13.654 fused_ordering(263) 00:11:13.654 fused_ordering(264) 00:11:13.654 fused_ordering(265) 00:11:13.654 fused_ordering(266) 00:11:13.654 fused_ordering(267) 00:11:13.654 fused_ordering(268) 00:11:13.654 fused_ordering(269) 00:11:13.654 fused_ordering(270) 00:11:13.654 fused_ordering(271) 00:11:13.654 fused_ordering(272) 00:11:13.654 fused_ordering(273) 00:11:13.654 fused_ordering(274) 00:11:13.654 fused_ordering(275) 00:11:13.654 fused_ordering(276) 00:11:13.654 fused_ordering(277) 00:11:13.654 fused_ordering(278) 00:11:13.654 fused_ordering(279) 00:11:13.654 fused_ordering(280) 00:11:13.654 fused_ordering(281) 00:11:13.654 fused_ordering(282) 00:11:13.654 fused_ordering(283) 00:11:13.654 fused_ordering(284) 00:11:13.654 fused_ordering(285) 00:11:13.654 fused_ordering(286) 00:11:13.654 fused_ordering(287) 00:11:13.654 fused_ordering(288) 00:11:13.654 fused_ordering(289) 00:11:13.654 fused_ordering(290) 00:11:13.654 fused_ordering(291) 00:11:13.654 fused_ordering(292) 00:11:13.654 fused_ordering(293) 00:11:13.654 fused_ordering(294) 00:11:13.654 fused_ordering(295) 00:11:13.654 fused_ordering(296) 00:11:13.654 fused_ordering(297) 00:11:13.654 fused_ordering(298) 00:11:13.654 fused_ordering(299) 00:11:13.654 fused_ordering(300) 00:11:13.654 fused_ordering(301) 00:11:13.654 fused_ordering(302) 00:11:13.654 fused_ordering(303) 00:11:13.654 fused_ordering(304) 00:11:13.654 fused_ordering(305) 00:11:13.654 fused_ordering(306) 00:11:13.654 fused_ordering(307) 00:11:13.654 fused_ordering(308) 00:11:13.654 fused_ordering(309) 00:11:13.654 fused_ordering(310) 00:11:13.654 fused_ordering(311) 00:11:13.654 
fused_ordering(312) 00:11:13.654 fused_ordering(313) 00:11:13.654 fused_ordering(314) 00:11:13.654 fused_ordering(315) 00:11:13.654 fused_ordering(316) 00:11:13.654 fused_ordering(317) 00:11:13.654 fused_ordering(318) 00:11:13.654 fused_ordering(319) 00:11:13.654 fused_ordering(320) 00:11:13.654 fused_ordering(321) 00:11:13.654 fused_ordering(322) 00:11:13.654 fused_ordering(323) 00:11:13.654 fused_ordering(324) 00:11:13.654 fused_ordering(325) 00:11:13.654 fused_ordering(326) 00:11:13.654 fused_ordering(327) 00:11:13.654 fused_ordering(328) 00:11:13.654 fused_ordering(329) 00:11:13.654 fused_ordering(330) 00:11:13.654 fused_ordering(331) 00:11:13.654 fused_ordering(332) 00:11:13.654 fused_ordering(333) 00:11:13.654 fused_ordering(334) 00:11:13.654 fused_ordering(335) 00:11:13.654 fused_ordering(336) 00:11:13.654 fused_ordering(337) 00:11:13.654 fused_ordering(338) 00:11:13.654 fused_ordering(339) 00:11:13.654 fused_ordering(340) 00:11:13.654 fused_ordering(341) 00:11:13.654 fused_ordering(342) 00:11:13.654 fused_ordering(343) 00:11:13.654 fused_ordering(344) 00:11:13.654 fused_ordering(345) 00:11:13.654 fused_ordering(346) 00:11:13.654 fused_ordering(347) 00:11:13.654 fused_ordering(348) 00:11:13.654 fused_ordering(349) 00:11:13.654 fused_ordering(350) 00:11:13.654 fused_ordering(351) 00:11:13.654 fused_ordering(352) 00:11:13.654 fused_ordering(353) 00:11:13.654 fused_ordering(354) 00:11:13.654 fused_ordering(355) 00:11:13.654 fused_ordering(356) 00:11:13.654 fused_ordering(357) 00:11:13.654 fused_ordering(358) 00:11:13.654 fused_ordering(359) 00:11:13.654 fused_ordering(360) 00:11:13.654 fused_ordering(361) 00:11:13.654 fused_ordering(362) 00:11:13.654 fused_ordering(363) 00:11:13.654 fused_ordering(364) 00:11:13.654 fused_ordering(365) 00:11:13.654 fused_ordering(366) 00:11:13.654 fused_ordering(367) 00:11:13.654 fused_ordering(368) 00:11:13.654 fused_ordering(369) 00:11:13.654 fused_ordering(370) 00:11:13.654 fused_ordering(371) 00:11:13.654 fused_ordering(372) 00:11:13.654 fused_ordering(373) 00:11:13.654 fused_ordering(374) 00:11:13.654 fused_ordering(375) 00:11:13.654 fused_ordering(376) 00:11:13.654 fused_ordering(377) 00:11:13.654 fused_ordering(378) 00:11:13.654 fused_ordering(379) 00:11:13.654 fused_ordering(380) 00:11:13.654 fused_ordering(381) 00:11:13.654 fused_ordering(382) 00:11:13.654 fused_ordering(383) 00:11:13.654 fused_ordering(384) 00:11:13.654 fused_ordering(385) 00:11:13.654 fused_ordering(386) 00:11:13.654 fused_ordering(387) 00:11:13.654 fused_ordering(388) 00:11:13.654 fused_ordering(389) 00:11:13.654 fused_ordering(390) 00:11:13.654 fused_ordering(391) 00:11:13.654 fused_ordering(392) 00:11:13.654 fused_ordering(393) 00:11:13.654 fused_ordering(394) 00:11:13.654 fused_ordering(395) 00:11:13.654 fused_ordering(396) 00:11:13.654 fused_ordering(397) 00:11:13.654 fused_ordering(398) 00:11:13.654 fused_ordering(399) 00:11:13.654 fused_ordering(400) 00:11:13.654 fused_ordering(401) 00:11:13.654 fused_ordering(402) 00:11:13.654 fused_ordering(403) 00:11:13.654 fused_ordering(404) 00:11:13.654 fused_ordering(405) 00:11:13.654 fused_ordering(406) 00:11:13.654 fused_ordering(407) 00:11:13.654 fused_ordering(408) 00:11:13.654 fused_ordering(409) 00:11:13.654 fused_ordering(410) 00:11:13.915 fused_ordering(411) 00:11:13.915 fused_ordering(412) 00:11:13.915 fused_ordering(413) 00:11:13.915 fused_ordering(414) 00:11:13.915 fused_ordering(415) 00:11:13.915 fused_ordering(416) 00:11:13.915 fused_ordering(417) 00:11:13.915 fused_ordering(418) 00:11:13.915 fused_ordering(419) 
00:11:13.915 fused_ordering(420) 00:11:13.915 fused_ordering(421) [identical fused_ordering(N) records continue one per entry through fused_ordering(956), timestamps 00:11:13.915 to 00:11:15.083; the run resumes below and ends at fused_ordering(1023)] 00:11:15.083
fused_ordering(957) 00:11:15.083 fused_ordering(958) 00:11:15.083 fused_ordering(959) 00:11:15.083 fused_ordering(960) 00:11:15.083 fused_ordering(961) 00:11:15.083 fused_ordering(962) 00:11:15.083 fused_ordering(963) 00:11:15.083 fused_ordering(964) 00:11:15.083 fused_ordering(965) 00:11:15.083 fused_ordering(966) 00:11:15.083 fused_ordering(967) 00:11:15.083 fused_ordering(968) 00:11:15.083 fused_ordering(969) 00:11:15.083 fused_ordering(970) 00:11:15.083 fused_ordering(971) 00:11:15.083 fused_ordering(972) 00:11:15.083 fused_ordering(973) 00:11:15.083 fused_ordering(974) 00:11:15.083 fused_ordering(975) 00:11:15.083 fused_ordering(976) 00:11:15.083 fused_ordering(977) 00:11:15.083 fused_ordering(978) 00:11:15.083 fused_ordering(979) 00:11:15.083 fused_ordering(980) 00:11:15.083 fused_ordering(981) 00:11:15.083 fused_ordering(982) 00:11:15.083 fused_ordering(983) 00:11:15.083 fused_ordering(984) 00:11:15.083 fused_ordering(985) 00:11:15.083 fused_ordering(986) 00:11:15.083 fused_ordering(987) 00:11:15.083 fused_ordering(988) 00:11:15.083 fused_ordering(989) 00:11:15.083 fused_ordering(990) 00:11:15.083 fused_ordering(991) 00:11:15.083 fused_ordering(992) 00:11:15.083 fused_ordering(993) 00:11:15.083 fused_ordering(994) 00:11:15.083 fused_ordering(995) 00:11:15.083 fused_ordering(996) 00:11:15.083 fused_ordering(997) 00:11:15.083 fused_ordering(998) 00:11:15.083 fused_ordering(999) 00:11:15.083 fused_ordering(1000) 00:11:15.083 fused_ordering(1001) 00:11:15.083 fused_ordering(1002) 00:11:15.083 fused_ordering(1003) 00:11:15.083 fused_ordering(1004) 00:11:15.083 fused_ordering(1005) 00:11:15.083 fused_ordering(1006) 00:11:15.083 fused_ordering(1007) 00:11:15.083 fused_ordering(1008) 00:11:15.083 fused_ordering(1009) 00:11:15.083 fused_ordering(1010) 00:11:15.083 fused_ordering(1011) 00:11:15.083 fused_ordering(1012) 00:11:15.083 fused_ordering(1013) 00:11:15.083 fused_ordering(1014) 00:11:15.083 fused_ordering(1015) 00:11:15.083 fused_ordering(1016) 00:11:15.083 fused_ordering(1017) 00:11:15.083 fused_ordering(1018) 00:11:15.083 fused_ordering(1019) 00:11:15.083 fused_ordering(1020) 00:11:15.083 fused_ordering(1021) 00:11:15.083 fused_ordering(1022) 00:11:15.083 fused_ordering(1023) 00:11:15.083 20:42:39 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:15.083 20:42:39 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:15.083 20:42:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:15.083 20:42:39 -- nvmf/common.sh@117 -- # sync 00:11:15.083 20:42:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:15.083 20:42:39 -- nvmf/common.sh@120 -- # set +e 00:11:15.083 20:42:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:15.083 20:42:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:15.083 rmmod nvme_tcp 00:11:15.083 rmmod nvme_fabrics 00:11:15.083 rmmod nvme_keyring 00:11:15.083 20:42:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:15.083 20:42:39 -- nvmf/common.sh@124 -- # set -e 00:11:15.083 20:42:39 -- nvmf/common.sh@125 -- # return 0 00:11:15.083 20:42:39 -- nvmf/common.sh@478 -- # '[' -n 2673876 ']' 00:11:15.083 20:42:39 -- nvmf/common.sh@479 -- # killprocess 2673876 00:11:15.083 20:42:39 -- common/autotest_common.sh@936 -- # '[' -z 2673876 ']' 00:11:15.083 20:42:39 -- common/autotest_common.sh@940 -- # kill -0 2673876 00:11:15.083 20:42:39 -- common/autotest_common.sh@941 -- # uname 00:11:15.083 20:42:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:15.083 20:42:39 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 2673876 00:11:15.083 20:42:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:15.083 20:42:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:15.083 20:42:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2673876' 00:11:15.083 killing process with pid 2673876 00:11:15.083 20:42:39 -- common/autotest_common.sh@955 -- # kill 2673876 00:11:15.083 20:42:39 -- common/autotest_common.sh@960 -- # wait 2673876 00:11:15.344 20:42:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:15.344 20:42:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:15.344 20:42:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:15.344 20:42:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:15.344 20:42:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:15.344 20:42:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.344 20:42:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.344 20:42:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.272 20:42:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:17.272 00:11:17.272 real 0m13.214s 00:11:17.272 user 0m7.248s 00:11:17.272 sys 0m6.749s 00:11:17.272 20:42:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:17.272 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:11:17.272 ************************************ 00:11:17.272 END TEST nvmf_fused_ordering 00:11:17.272 ************************************ 00:11:17.272 20:42:41 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:17.272 20:42:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:17.272 20:42:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:17.272 20:42:41 -- common/autotest_common.sh@10 -- # set +x 00:11:17.533 ************************************ 00:11:17.533 START TEST nvmf_delete_subsystem 00:11:17.533 ************************************ 00:11:17.533 20:42:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:17.533 * Looking for test storage... 
00:11:17.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.533 20:42:42 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.533 20:42:42 -- nvmf/common.sh@7 -- # uname -s 00:11:17.533 20:42:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.533 20:42:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.533 20:42:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.533 20:42:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.533 20:42:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.533 20:42:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.533 20:42:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.794 20:42:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.794 20:42:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.794 20:42:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.794 20:42:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:17.794 20:42:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:17.794 20:42:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.794 20:42:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.794 20:42:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.794 20:42:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.794 20:42:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.794 20:42:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.794 20:42:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.794 20:42:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.794 20:42:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.794 20:42:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.794 20:42:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.794 20:42:42 -- paths/export.sh@5 -- # export PATH 00:11:17.794 20:42:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.794 20:42:42 -- nvmf/common.sh@47 -- # : 0 00:11:17.794 20:42:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.794 20:42:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.794 20:42:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.794 20:42:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.794 20:42:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.794 20:42:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.794 20:42:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.794 20:42:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.794 20:42:42 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:17.795 20:42:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:17.795 20:42:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.795 20:42:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:17.795 20:42:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:17.795 20:42:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:17.795 20:42:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.795 20:42:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:17.795 20:42:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.795 20:42:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:17.795 20:42:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:17.795 20:42:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.795 20:42:42 -- common/autotest_common.sh@10 -- # set +x 00:11:24.386 20:42:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:24.386 20:42:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:24.386 20:42:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:24.386 20:42:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:24.386 20:42:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:24.386 20:42:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:24.386 20:42:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:24.386 20:42:48 -- nvmf/common.sh@295 -- # net_devs=() 00:11:24.386 20:42:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:24.386 20:42:48 -- nvmf/common.sh@296 -- # e810=() 00:11:24.386 20:42:48 -- nvmf/common.sh@296 -- # local -ga e810 00:11:24.386 20:42:48 -- nvmf/common.sh@297 -- # x722=() 
00:11:24.386 20:42:48 -- nvmf/common.sh@297 -- # local -ga x722 00:11:24.386 20:42:48 -- nvmf/common.sh@298 -- # mlx=() 00:11:24.386 20:42:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:24.386 20:42:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:24.386 20:42:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:24.386 20:42:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:24.386 20:42:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:24.386 20:42:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.386 20:42:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:24.386 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:24.386 20:42:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:24.386 20:42:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:24.386 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:24.386 20:42:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:24.386 20:42:48 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.386 20:42:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.386 20:42:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:24.386 20:42:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.386 20:42:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:24.386 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:24.386 20:42:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
00:11:24.386 20:42:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:24.386 20:42:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:24.386 20:42:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:24.386 20:42:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:24.386 20:42:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:24.386 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:24.386 20:42:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:24.386 20:42:48 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:24.386 20:42:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:24.386 20:42:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:24.386 20:42:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:24.386 20:42:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:24.386 20:42:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:24.386 20:42:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:24.386 20:42:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:24.386 20:42:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:24.386 20:42:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:24.386 20:42:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:24.386 20:42:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:24.386 20:42:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:24.386 20:42:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:24.386 20:42:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:24.386 20:42:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:24.386 20:42:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:24.386 20:42:48 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:24.386 20:42:48 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:24.386 20:42:48 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:24.386 20:42:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:24.386 20:42:48 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:24.386 20:42:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:24.386 20:42:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:24.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:24.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:11:24.386 00:11:24.386 --- 10.0.0.2 ping statistics --- 00:11:24.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.386 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:11:24.386 20:42:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:24.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:24.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:11:24.649 00:11:24.649 --- 10.0.0.1 ping statistics --- 00:11:24.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:24.649 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:11:24.649 20:42:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:24.649 20:42:49 -- nvmf/common.sh@411 -- # return 0 00:11:24.649 20:42:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:24.649 20:42:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:24.649 20:42:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:24.649 20:42:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:24.649 20:42:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:24.649 20:42:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:24.649 20:42:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:24.649 20:42:49 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:24.649 20:42:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:24.649 20:42:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:24.649 20:42:49 -- common/autotest_common.sh@10 -- # set +x 00:11:24.649 20:42:49 -- nvmf/common.sh@470 -- # nvmfpid=2678580 00:11:24.649 20:42:49 -- nvmf/common.sh@471 -- # waitforlisten 2678580 00:11:24.649 20:42:49 -- common/autotest_common.sh@817 -- # '[' -z 2678580 ']' 00:11:24.649 20:42:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.649 20:42:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:24.649 20:42:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.649 20:42:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:24.649 20:42:49 -- common/autotest_common.sh@10 -- # set +x 00:11:24.649 20:42:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:24.649 [2024-04-24 20:42:49.124676] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:11:24.649 [2024-04-24 20:42:49.124748] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.649 EAL: No free 2048 kB hugepages reported on node 1 00:11:24.649 [2024-04-24 20:42:49.210688] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:24.909 [2024-04-24 20:42:49.303792] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:24.909 [2024-04-24 20:42:49.303851] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:24.909 [2024-04-24 20:42:49.303859] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:24.909 [2024-04-24 20:42:49.303866] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:24.909 [2024-04-24 20:42:49.303873] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
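A minimal by-hand equivalent of what nvmfappstart and waitforlisten are doing at this point (launch nvmf_tgt inside the target namespace, then poll its RPC socket until it answers). This is a sketch, not an excerpt from the harness: the RPC socket path and the rpc_get_methods probe are assumptions, while the nvmf_tgt binary path and flags are the ones shown in the log.

  # assumption: default RPC socket at /var/tmp/spdk.sock
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk
  # start the target in the namespace with the same shm id, trace mask and core mask as above
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
  tgt_pid=$!
  # block until the RPC socket accepts a request, bailing out if the target dies first
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
    sleep 0.5
  done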
00:11:24.909 [2024-04-24 20:42:49.303955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:24.909 [2024-04-24 20:42:49.303960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.481 20:42:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:25.481 20:42:49 -- common/autotest_common.sh@850 -- # return 0 00:11:25.481 20:42:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:25.481 20:42:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:25.481 20:42:49 -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 20:42:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:25.481 20:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.481 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 [2024-04-24 20:42:50.030897] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:25.481 20:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:25.481 20:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.481 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 20:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.481 20:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.481 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 [2024-04-24 20:42:50.047065] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.481 20:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:25.481 20:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.481 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 NULL1 00:11:25.481 20:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:25.481 20:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.481 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 Delay0 00:11:25.481 20:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.481 20:42:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:25.481 20:42:50 -- common/autotest_common.sh@10 -- # set +x 00:11:25.481 20:42:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@28 -- # perf_pid=2678925 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:25.481 20:42:50 -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:25.481 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.743 [2024-04-24 20:42:50.131668] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:27.652 20:42:52 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.652 20:42:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:27.652 20:42:52 -- common/autotest_common.sh@10 -- # set +x 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 
00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 [2024-04-24 20:42:52.256018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f828b0 is same with the state(5) to be set 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 
00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 starting I/O failed: -6 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 [2024-04-24 20:42:52.259853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f07c400c3d0 is same with the state(5) to be set 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, 
sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Write completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.652 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Write completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Write completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Write completed with error (sct=0, sc=8) 00:11:27.653 Write completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Write completed with error (sct=0, sc=8) 00:11:27.653 Write completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:27.653 Read completed with error (sct=0, sc=8) 00:11:28.595 [2024-04-24 20:42:53.226670] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f820e0 is same with the state(5) to be set 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 [2024-04-24 20:42:53.258457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8add0 is same with the state(5) to be set 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 
00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 [2024-04-24 20:42:53.258614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f82720 is same with the state(5) to be set 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 [2024-04-24 20:42:53.261883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f07c400c690 is same with the state(5) to be set 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Write completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.854 Read completed with error (sct=0, sc=8) 00:11:28.855 Read completed with error (sct=0, sc=8) 00:11:28.855 Write completed with error (sct=0, sc=8) 00:11:28.855 Read completed with error (sct=0, sc=8) 00:11:28.855 Write completed with error (sct=0, sc=8) 00:11:28.855 Read completed with error (sct=0, sc=8) 00:11:28.855 Write completed with error (sct=0, sc=8) 00:11:28.855 Read completed with error (sct=0, sc=8) 00:11:28.855 Write completed with error (sct=0, sc=8) 00:11:28.855 Read completed with error (sct=0, sc=8) 00:11:28.855 [2024-04-24 20:42:53.262201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f07c400bf90 is same with the state(5) to be set 00:11:28.855 [2024-04-24 20:42:53.262644] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1f820e0 (9): Bad file descriptor 00:11:28.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:28.855 Initializing NVMe Controllers 00:11:28.855 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:28.855 Controller IO queue size 128, less than required. 00:11:28.855 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:28.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:28.855 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:28.855 Initialization complete. Launching workers. 00:11:28.855 ======================================================== 00:11:28.855 Latency(us) 00:11:28.855 Device Information : IOPS MiB/s Average min max 00:11:28.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.87 0.08 894663.50 295.79 1006299.91 00:11:28.855 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 162.90 0.08 909690.99 259.85 1010026.30 00:11:28.855 ======================================================== 00:11:28.855 Total : 332.77 0.16 902019.77 259.85 1010026.30 00:11:28.855 00:11:28.855 20:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.855 20:42:53 -- target/delete_subsystem.sh@34 -- # delay=0 00:11:28.855 20:42:53 -- target/delete_subsystem.sh@35 -- # kill -0 2678925 00:11:28.855 20:42:53 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@35 -- # kill -0 2678925 00:11:29.425 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2678925) - No such process 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@45 -- # NOT wait 2678925 00:11:29.425 20:42:53 -- common/autotest_common.sh@638 -- # local es=0 00:11:29.425 20:42:53 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2678925 00:11:29.425 20:42:53 -- common/autotest_common.sh@626 -- # local arg=wait 00:11:29.425 20:42:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.425 20:42:53 -- common/autotest_common.sh@630 -- # type -t wait 00:11:29.425 20:42:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:29.425 20:42:53 -- common/autotest_common.sh@641 -- # wait 2678925 00:11:29.425 20:42:53 -- common/autotest_common.sh@641 -- # es=1 00:11:29.425 20:42:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:29.425 20:42:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:29.425 20:42:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:29.425 20:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.425 20:42:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.425 20:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.425 20:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.425 20:42:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.425 [2024-04-24 20:42:53.795498] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:11:29.425 20:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.425 20:42:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:29.425 20:42:53 -- common/autotest_common.sh@10 -- # set +x 00:11:29.425 20:42:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@54 -- # perf_pid=2679597 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@56 -- # delay=0 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:29.425 20:42:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:29.425 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.425 [2024-04-24 20:42:53.862206] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:29.685 20:42:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:29.685 20:42:54 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:29.685 20:42:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.275 20:42:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.275 20:42:54 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:30.275 20:42:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:30.845 20:42:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:30.845 20:42:55 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:30.845 20:42:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.416 20:42:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.416 20:42:55 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:31.416 20:42:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:31.987 20:42:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:31.987 20:42:56 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:31.987 20:42:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.248 20:42:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.248 20:42:56 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:32.248 20:42:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:32.510 Initializing NVMe Controllers 00:11:32.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:32.510 Controller IO queue size 128, less than required. 00:11:32.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:32.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:32.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:32.510 Initialization complete. Launching workers. 
00:11:32.510 ======================================================== 00:11:32.510 Latency(us) 00:11:32.510 Device Information : IOPS MiB/s Average min max 00:11:32.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002353.90 1000188.83 1041998.34 00:11:32.510 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004148.98 1000291.98 1041621.82 00:11:32.510 ======================================================== 00:11:32.510 Total : 256.00 0.12 1003251.44 1000188.83 1041998.34 00:11:32.510 00:11:32.770 20:42:57 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:32.770 20:42:57 -- target/delete_subsystem.sh@57 -- # kill -0 2679597 00:11:32.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2679597) - No such process 00:11:32.770 20:42:57 -- target/delete_subsystem.sh@67 -- # wait 2679597 00:11:32.770 20:42:57 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:32.770 20:42:57 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:32.770 20:42:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:32.770 20:42:57 -- nvmf/common.sh@117 -- # sync 00:11:32.770 20:42:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.770 20:42:57 -- nvmf/common.sh@120 -- # set +e 00:11:32.770 20:42:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.770 20:42:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.770 rmmod nvme_tcp 00:11:32.770 rmmod nvme_fabrics 00:11:32.770 rmmod nvme_keyring 00:11:32.770 20:42:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.770 20:42:57 -- nvmf/common.sh@124 -- # set -e 00:11:32.770 20:42:57 -- nvmf/common.sh@125 -- # return 0 00:11:33.031 20:42:57 -- nvmf/common.sh@478 -- # '[' -n 2678580 ']' 00:11:33.031 20:42:57 -- nvmf/common.sh@479 -- # killprocess 2678580 00:11:33.031 20:42:57 -- common/autotest_common.sh@936 -- # '[' -z 2678580 ']' 00:11:33.031 20:42:57 -- common/autotest_common.sh@940 -- # kill -0 2678580 00:11:33.031 20:42:57 -- common/autotest_common.sh@941 -- # uname 00:11:33.031 20:42:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:33.031 20:42:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2678580 00:11:33.031 20:42:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:33.031 20:42:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:33.031 20:42:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2678580' 00:11:33.031 killing process with pid 2678580 00:11:33.031 20:42:57 -- common/autotest_common.sh@955 -- # kill 2678580 00:11:33.031 20:42:57 -- common/autotest_common.sh@960 -- # wait 2678580 00:11:33.031 20:42:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:33.031 20:42:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:33.031 20:42:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:33.031 20:42:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.031 20:42:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.031 20:42:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.031 20:42:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.031 20:42:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.583 20:42:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:35.583 00:11:35.583 real 0m17.612s 00:11:35.583 user 0m30.873s 00:11:35.583 sys 0m5.981s 
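For readers following the trace above, the delete_subsystem exercise reduces to a handful of rpc.py calls plus a background spdk_nvme_perf run that is expected to die when the subsystem disappears underneath it. Below is a minimal sketch reconstructed only from the commands visible in this log; paths are abbreviated, and the delete step itself is inferred from the test name and the aborted I/O (sc=8) rather than shown in this excerpt.

#!/usr/bin/env bash
# Sketch only: reconstructed from the trace above, not the literal delete_subsystem.sh.
rpc=./scripts/rpc.py              # assumed relative paths; the CI run uses absolute ones
perf=./build/bin/spdk_nvme_perf

# Target side: subsystem capped at 10 namespaces, a TCP listener, one Delay0 namespace.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Host side: 3-second 70/30 random read/write run on cores 2-3, queue depth 128, 512 B I/O.
$perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# The subsystem is presumably torn down while I/O is in flight; the script then polls
# until the perf process exits (the "kill: ... No such process" lines above are this loop
# finding the process already gone).
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
  (( delay++ > 20 )) && break     # give up after ~10 s, as in the traced loop
  sleep 0.5
done
wait "$perf_pid" || true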
00:11:35.583 20:42:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:35.583 20:42:59 -- common/autotest_common.sh@10 -- # set +x 00:11:35.583 ************************************ 00:11:35.583 END TEST nvmf_delete_subsystem 00:11:35.583 ************************************ 00:11:35.583 20:42:59 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:35.583 20:42:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:35.583 20:42:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:35.583 20:42:59 -- common/autotest_common.sh@10 -- # set +x 00:11:35.583 ************************************ 00:11:35.583 START TEST nvmf_ns_masking 00:11:35.583 ************************************ 00:11:35.583 20:42:59 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:35.583 * Looking for test storage... 00:11:35.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.583 20:42:59 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.583 20:42:59 -- nvmf/common.sh@7 -- # uname -s 00:11:35.583 20:42:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.583 20:42:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.583 20:42:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.583 20:42:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.583 20:42:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.583 20:42:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.583 20:42:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.583 20:42:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.583 20:42:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.583 20:42:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.583 20:42:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:35.583 20:42:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:35.583 20:42:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.583 20:42:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.583 20:42:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.583 20:42:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.583 20:42:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.583 20:43:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.583 20:43:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.583 20:43:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.583 20:43:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.583 20:43:00 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.583 20:43:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.583 20:43:00 -- paths/export.sh@5 -- # export PATH 00:11:35.583 20:43:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.583 20:43:00 -- nvmf/common.sh@47 -- # : 0 00:11:35.583 20:43:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.583 20:43:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.583 20:43:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.583 20:43:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.583 20:43:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.583 20:43:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.583 20:43:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.583 20:43:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.583 20:43:00 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.583 20:43:00 -- target/ns_masking.sh@11 -- # loops=5 00:11:35.583 20:43:00 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:35.583 20:43:00 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:11:35.583 20:43:00 -- target/ns_masking.sh@15 -- # uuidgen 00:11:35.583 20:43:00 -- target/ns_masking.sh@15 -- # HOSTID=6cb67d40-cd29-4b7f-afdb-9928707ea0f3 00:11:35.583 20:43:00 -- target/ns_masking.sh@44 -- # nvmftestinit 00:11:35.583 20:43:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:35.583 20:43:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.583 20:43:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:35.583 20:43:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:35.583 20:43:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:35.583 20:43:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.584 20:43:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.584 20:43:00 -- common/autotest_common.sh@22 
-- # _remove_spdk_ns 00:11:35.584 20:43:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:35.584 20:43:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:35.584 20:43:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.584 20:43:00 -- common/autotest_common.sh@10 -- # set +x 00:11:43.731 20:43:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:43.731 20:43:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.731 20:43:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.731 20:43:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.731 20:43:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.731 20:43:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.731 20:43:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.731 20:43:06 -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.731 20:43:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.731 20:43:06 -- nvmf/common.sh@296 -- # e810=() 00:11:43.731 20:43:06 -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.731 20:43:06 -- nvmf/common.sh@297 -- # x722=() 00:11:43.731 20:43:06 -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.731 20:43:06 -- nvmf/common.sh@298 -- # mlx=() 00:11:43.731 20:43:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.731 20:43:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.731 20:43:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.731 20:43:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.731 20:43:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.731 20:43:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.731 20:43:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:43.731 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:43.731 20:43:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.731 20:43:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:43.731 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:43.731 20:43:06 -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.731 20:43:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.731 20:43:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.731 20:43:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.731 20:43:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:43.731 20:43:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.731 20:43:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:43.731 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:43.731 20:43:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.731 20:43:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.731 20:43:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.731 20:43:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:43.732 20:43:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.732 20:43:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:43.732 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:43.732 20:43:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.732 20:43:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:43.732 20:43:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:43.732 20:43:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:43.732 20:43:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:43.732 20:43:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:43.732 20:43:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.732 20:43:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.732 20:43:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.732 20:43:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.732 20:43:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.732 20:43:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.732 20:43:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.732 20:43:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.732 20:43:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.732 20:43:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.732 20:43:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.732 20:43:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.732 20:43:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.732 20:43:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.732 20:43:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.732 20:43:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.732 20:43:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.732 20:43:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.732 20:43:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.732 20:43:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:11:43.732 00:11:43.732 --- 10.0.0.2 ping statistics --- 00:11:43.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.732 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:11:43.732 20:43:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:43.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:11:43.732 00:11:43.732 --- 10.0.0.1 ping statistics --- 00:11:43.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.732 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:11:43.732 20:43:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.732 20:43:07 -- nvmf/common.sh@411 -- # return 0 00:11:43.732 20:43:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:43.732 20:43:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.732 20:43:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:43.732 20:43:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:43.732 20:43:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.732 20:43:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:43.732 20:43:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:43.732 20:43:07 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:11:43.732 20:43:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:43.732 20:43:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:43.732 20:43:07 -- common/autotest_common.sh@10 -- # set +x 00:11:43.732 20:43:07 -- nvmf/common.sh@470 -- # nvmfpid=2684602 00:11:43.732 20:43:07 -- nvmf/common.sh@471 -- # waitforlisten 2684602 00:11:43.732 20:43:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:43.732 20:43:07 -- common/autotest_common.sh@817 -- # '[' -z 2684602 ']' 00:11:43.732 20:43:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.732 20:43:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:43.732 20:43:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.732 20:43:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:43.732 20:43:07 -- common/autotest_common.sh@10 -- # set +x 00:11:43.732 [2024-04-24 20:43:07.351799] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:11:43.732 [2024-04-24 20:43:07.351894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.732 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.732 [2024-04-24 20:43:07.429543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.732 [2024-04-24 20:43:07.503519] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
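The network plumbing that the ping checks above validate is built entirely with iproute2 plus one iptables rule: one port of the e810 pair is moved into a private namespace and acts as the target, the other stays in the root namespace as the initiator, and the target application is then launched inside that namespace. A rough sketch of what the traced nvmf_tcp_init and nvmfappstart steps do, using only commands that appear in this log (interface names cvl_0_0/cvl_0_1 are what this rig's ice driver exposes; the nvmf_tgt path is abbreviated):

# Sketch of the TCP test-network setup traced above.
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target port lives in the namespace
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1               # initiator port stays in the root ns
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in

# Sanity checks, as in the log: both directions must answer a single ping.
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"

# The target application then runs inside the namespace:
ip netns exec "$NVMF_TARGET_NAMESPACE" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &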
00:11:43.732 [2024-04-24 20:43:07.503561] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.732 [2024-04-24 20:43:07.503573] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.732 [2024-04-24 20:43:07.503581] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.732 [2024-04-24 20:43:07.503588] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.732 [2024-04-24 20:43:07.503651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.732 [2024-04-24 20:43:07.503723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.732 [2024-04-24 20:43:07.503873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.732 [2024-04-24 20:43:07.504007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.732 20:43:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:43.732 20:43:08 -- common/autotest_common.sh@850 -- # return 0 00:11:43.732 20:43:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:43.732 20:43:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:43.732 20:43:08 -- common/autotest_common.sh@10 -- # set +x 00:11:43.732 20:43:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.732 20:43:08 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:43.993 [2024-04-24 20:43:08.450204] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.993 20:43:08 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:11:43.993 20:43:08 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:11:43.993 20:43:08 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:44.254 Malloc1 00:11:44.254 20:43:08 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:44.514 Malloc2 00:11:44.514 20:43:08 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:44.514 20:43:09 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:44.774 20:43:09 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.035 [2024-04-24 20:43:09.539401] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.035 20:43:09 -- target/ns_masking.sh@61 -- # connect 00:11:45.035 20:43:09 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cb67d40-cd29-4b7f-afdb-9928707ea0f3 -a 10.0.0.2 -s 4420 -i 4 00:11:45.295 20:43:09 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.295 20:43:09 -- common/autotest_common.sh@1184 -- # local i=0 00:11:45.295 20:43:09 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.295 20:43:09 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 
00:11:45.295 20:43:09 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:47.208 20:43:11 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:47.208 20:43:11 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:47.208 20:43:11 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.208 20:43:11 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:47.208 20:43:11 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.208 20:43:11 -- common/autotest_common.sh@1194 -- # return 0 00:11:47.208 20:43:11 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:47.208 20:43:11 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:47.469 20:43:11 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:47.469 20:43:11 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:47.469 20:43:11 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:11:47.469 20:43:11 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.469 20:43:11 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.469 [ 0]:0x1 00:11:47.469 20:43:11 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.469 20:43:11 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.469 20:43:11 -- target/ns_masking.sh@40 -- # nguid=d0f4ffe9d9174d91ac340f529f7b7099 00:11:47.469 20:43:11 -- target/ns_masking.sh@41 -- # [[ d0f4ffe9d9174d91ac340f529f7b7099 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.469 20:43:11 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:47.730 20:43:12 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:11:47.730 20:43:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.730 20:43:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:47.730 [ 0]:0x1 00:11:47.730 20:43:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:47.730 20:43:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.730 20:43:12 -- target/ns_masking.sh@40 -- # nguid=d0f4ffe9d9174d91ac340f529f7b7099 00:11:47.730 20:43:12 -- target/ns_masking.sh@41 -- # [[ d0f4ffe9d9174d91ac340f529f7b7099 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.730 20:43:12 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:11:47.730 20:43:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:47.730 20:43:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:47.730 [ 1]:0x2 00:11:47.730 20:43:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.730 20:43:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:47.730 20:43:12 -- target/ns_masking.sh@40 -- # nguid=e5c96b4f58764a36ab6793e1278bf050 00:11:47.730 20:43:12 -- target/ns_masking.sh@41 -- # [[ e5c96b4f58764a36ab6793e1278bf050 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.730 20:43:12 -- target/ns_masking.sh@69 -- # disconnect 00:11:47.730 20:43:12 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.730 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.730 20:43:12 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.991 20:43:12 -- target/ns_masking.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:48.252 20:43:12 -- target/ns_masking.sh@77 -- # connect 1 00:11:48.252 20:43:12 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cb67d40-cd29-4b7f-afdb-9928707ea0f3 -a 10.0.0.2 -s 4420 -i 4 00:11:48.252 20:43:12 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:48.252 20:43:12 -- common/autotest_common.sh@1184 -- # local i=0 00:11:48.252 20:43:12 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.252 20:43:12 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:11:48.252 20:43:12 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:11:48.252 20:43:12 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:50.798 20:43:14 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:50.798 20:43:14 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:50.798 20:43:14 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.798 20:43:14 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:50.798 20:43:14 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.798 20:43:14 -- common/autotest_common.sh@1194 -- # return 0 00:11:50.798 20:43:14 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:50.798 20:43:14 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:50.798 20:43:14 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:50.798 20:43:14 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:50.798 20:43:14 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:11:50.798 20:43:14 -- common/autotest_common.sh@638 -- # local es=0 00:11:50.798 20:43:14 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:50.798 20:43:14 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:50.798 20:43:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.798 20:43:14 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:50.798 20:43:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:50.798 20:43:14 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:50.798 20:43:14 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.798 20:43:14 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:50.798 20:43:14 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.798 20:43:14 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.798 20:43:15 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:50.798 20:43:15 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.798 20:43:15 -- common/autotest_common.sh@641 -- # es=1 00:11:50.798 20:43:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:50.798 20:43:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:50.799 20:43:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:50.799 20:43:15 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:11:50.799 20:43:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:50.799 20:43:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.799 [ 0]:0x2 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # nguid=e5c96b4f58764a36ab6793e1278bf050 00:11:50.799 20:43:15 -- target/ns_masking.sh@41 -- # [[ e5c96b4f58764a36ab6793e1278bf050 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.799 20:43:15 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:50.799 20:43:15 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:11:50.799 20:43:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.799 20:43:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:50.799 [ 0]:0x1 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # nguid=d0f4ffe9d9174d91ac340f529f7b7099 00:11:50.799 20:43:15 -- target/ns_masking.sh@41 -- # [[ d0f4ffe9d9174d91ac340f529f7b7099 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.799 20:43:15 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:11:50.799 20:43:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:50.799 20:43:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:50.799 [ 1]:0x2 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:50.799 20:43:15 -- target/ns_masking.sh@40 -- # nguid=e5c96b4f58764a36ab6793e1278bf050 00:11:50.799 20:43:15 -- target/ns_masking.sh@41 -- # [[ e5c96b4f58764a36ab6793e1278bf050 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:50.799 20:43:15 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.060 20:43:15 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:11:51.060 20:43:15 -- common/autotest_common.sh@638 -- # local es=0 00:11:51.060 20:43:15 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:51.060 20:43:15 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:51.060 20:43:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.060 20:43:15 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:51.060 20:43:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:51.060 20:43:15 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:51.060 20:43:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.060 20:43:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:51.060 20:43:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:51.060 20:43:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.060 20:43:15 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:51.060 20:43:15 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.060 20:43:15 -- common/autotest_common.sh@641 -- # es=1 00:11:51.060 20:43:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:51.060 20:43:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:51.060 20:43:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:51.060 20:43:15 -- 
target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:11:51.060 20:43:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:51.060 20:43:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:51.060 [ 0]:0x2 00:11:51.060 20:43:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:51.060 20:43:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:51.321 20:43:15 -- target/ns_masking.sh@40 -- # nguid=e5c96b4f58764a36ab6793e1278bf050 00:11:51.321 20:43:15 -- target/ns_masking.sh@41 -- # [[ e5c96b4f58764a36ab6793e1278bf050 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:51.321 20:43:15 -- target/ns_masking.sh@91 -- # disconnect 00:11:51.321 20:43:15 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:51.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.321 20:43:15 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:51.582 20:43:15 -- target/ns_masking.sh@95 -- # connect 2 00:11:51.582 20:43:15 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6cb67d40-cd29-4b7f-afdb-9928707ea0f3 -a 10.0.0.2 -s 4420 -i 4 00:11:51.582 20:43:16 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:51.582 20:43:16 -- common/autotest_common.sh@1184 -- # local i=0 00:11:51.582 20:43:16 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:51.582 20:43:16 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:11:51.582 20:43:16 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:11:51.582 20:43:16 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:54.175 20:43:18 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:54.175 20:43:18 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:54.175 20:43:18 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:54.175 20:43:18 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:11:54.175 20:43:18 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:54.175 20:43:18 -- common/autotest_common.sh@1194 -- # return 0 00:11:54.175 20:43:18 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:11:54.175 20:43:18 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:54.175 20:43:18 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:11:54.175 20:43:18 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:11:54.175 20:43:18 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:11:54.175 20:43:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.175 20:43:18 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.175 [ 0]:0x1 00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # nguid=d0f4ffe9d9174d91ac340f529f7b7099 00:11:54.175 20:43:18 -- target/ns_masking.sh@41 -- # [[ d0f4ffe9d9174d91ac340f529f7b7099 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.175 20:43:18 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:11:54.175 20:43:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.175 20:43:18 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.175 [ 1]:0x2 
00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # nguid=e5c96b4f58764a36ab6793e1278bf050 00:11:54.175 20:43:18 -- target/ns_masking.sh@41 -- # [[ e5c96b4f58764a36ab6793e1278bf050 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.175 20:43:18 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:54.175 20:43:18 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:11:54.175 20:43:18 -- common/autotest_common.sh@638 -- # local es=0 00:11:54.175 20:43:18 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.175 20:43:18 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:54.175 20:43:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.175 20:43:18 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:54.175 20:43:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.175 20:43:18 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:54.175 20:43:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.175 20:43:18 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.175 20:43:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.437 20:43:18 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:54.437 20:43:18 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.437 20:43:18 -- common/autotest_common.sh@641 -- # es=1 00:11:54.437 20:43:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:54.437 20:43:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:54.437 20:43:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:54.437 20:43:18 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:11:54.437 20:43:18 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.437 20:43:18 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.437 [ 0]:0x2 00:11:54.437 20:43:18 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.437 20:43:18 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.437 20:43:18 -- target/ns_masking.sh@40 -- # nguid=e5c96b4f58764a36ab6793e1278bf050 00:11:54.437 20:43:18 -- target/ns_masking.sh@41 -- # [[ e5c96b4f58764a36ab6793e1278bf050 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.437 20:43:18 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.437 20:43:18 -- common/autotest_common.sh@638 -- # local es=0 00:11:54.437 20:43:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.437 20:43:18 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.437 20:43:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.437 20:43:18 -- common/autotest_common.sh@630 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.437 20:43:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.437 20:43:18 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.437 20:43:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.437 20:43:18 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.437 20:43:18 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:54.437 20:43:18 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:54.700 [2024-04-24 20:43:19.093646] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:54.700 request: 00:11:54.700 { 00:11:54.700 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:54.700 "nsid": 2, 00:11:54.700 "host": "nqn.2016-06.io.spdk:host1", 00:11:54.700 "method": "nvmf_ns_remove_host", 00:11:54.700 "req_id": 1 00:11:54.700 } 00:11:54.700 Got JSON-RPC error response 00:11:54.700 response: 00:11:54.700 { 00:11:54.700 "code": -32602, 00:11:54.700 "message": "Invalid parameters" 00:11:54.700 } 00:11:54.700 20:43:19 -- common/autotest_common.sh@641 -- # es=1 00:11:54.700 20:43:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:54.700 20:43:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:54.700 20:43:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:54.700 20:43:19 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:11:54.700 20:43:19 -- common/autotest_common.sh@638 -- # local es=0 00:11:54.700 20:43:19 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:11:54.700 20:43:19 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:11:54.700 20:43:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.700 20:43:19 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:11:54.700 20:43:19 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:54.700 20:43:19 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:11:54.700 20:43:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.700 20:43:19 -- target/ns_masking.sh@39 -- # grep 0x1 00:11:54.700 20:43:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:54.700 20:43:19 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.700 20:43:19 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:11:54.700 20:43:19 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.700 20:43:19 -- common/autotest_common.sh@641 -- # es=1 00:11:54.700 20:43:19 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:54.700 20:43:19 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:54.700 20:43:19 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:54.700 20:43:19 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:11:54.700 20:43:19 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:11:54.700 20:43:19 -- target/ns_masking.sh@39 -- # grep 0x2 00:11:54.700 [ 0]:0x2 00:11:54.700 20:43:19 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:54.700 20:43:19 -- 
target/ns_masking.sh@40 -- # jq -r .nguid 00:11:54.700 20:43:19 -- target/ns_masking.sh@40 -- # nguid=e5c96b4f58764a36ab6793e1278bf050 00:11:54.700 20:43:19 -- target/ns_masking.sh@41 -- # [[ e5c96b4f58764a36ab6793e1278bf050 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:54.700 20:43:19 -- target/ns_masking.sh@108 -- # disconnect 00:11:54.700 20:43:19 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:54.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.962 20:43:19 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.222 20:43:19 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:11:55.222 20:43:19 -- target/ns_masking.sh@114 -- # nvmftestfini 00:11:55.222 20:43:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:55.222 20:43:19 -- nvmf/common.sh@117 -- # sync 00:11:55.222 20:43:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.222 20:43:19 -- nvmf/common.sh@120 -- # set +e 00:11:55.222 20:43:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.222 20:43:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.222 rmmod nvme_tcp 00:11:55.222 rmmod nvme_fabrics 00:11:55.222 rmmod nvme_keyring 00:11:55.222 20:43:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.222 20:43:19 -- nvmf/common.sh@124 -- # set -e 00:11:55.222 20:43:19 -- nvmf/common.sh@125 -- # return 0 00:11:55.222 20:43:19 -- nvmf/common.sh@478 -- # '[' -n 2684602 ']' 00:11:55.222 20:43:19 -- nvmf/common.sh@479 -- # killprocess 2684602 00:11:55.222 20:43:19 -- common/autotest_common.sh@936 -- # '[' -z 2684602 ']' 00:11:55.222 20:43:19 -- common/autotest_common.sh@940 -- # kill -0 2684602 00:11:55.223 20:43:19 -- common/autotest_common.sh@941 -- # uname 00:11:55.223 20:43:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:55.223 20:43:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2684602 00:11:55.223 20:43:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:55.223 20:43:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:55.223 20:43:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2684602' 00:11:55.223 killing process with pid 2684602 00:11:55.223 20:43:19 -- common/autotest_common.sh@955 -- # kill 2684602 00:11:55.223 20:43:19 -- common/autotest_common.sh@960 -- # wait 2684602 00:11:55.484 20:43:19 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:55.484 20:43:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:55.484 20:43:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:55.484 20:43:19 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.484 20:43:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.484 20:43:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.484 20:43:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.484 20:43:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.399 20:43:21 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.399 00:11:57.399 real 0m22.118s 00:11:57.399 user 0m54.837s 00:11:57.399 sys 0m7.031s 00:11:57.399 20:43:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:57.399 20:43:21 -- common/autotest_common.sh@10 -- # set +x 00:11:57.399 ************************************ 00:11:57.399 END TEST nvmf_ns_masking 00:11:57.399 
************************************ 00:11:57.399 20:43:22 -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:57.399 20:43:22 -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:57.399 20:43:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:57.399 20:43:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:57.399 20:43:22 -- common/autotest_common.sh@10 -- # set +x 00:11:57.660 ************************************ 00:11:57.660 START TEST nvmf_nvme_cli 00:11:57.660 ************************************ 00:11:57.660 20:43:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:57.660 * Looking for test storage... 00:11:57.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.660 20:43:22 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.660 20:43:22 -- nvmf/common.sh@7 -- # uname -s 00:11:57.660 20:43:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.660 20:43:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.660 20:43:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.660 20:43:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.660 20:43:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.660 20:43:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.660 20:43:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.660 20:43:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.660 20:43:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.660 20:43:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.660 20:43:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:57.660 20:43:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:11:57.660 20:43:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.923 20:43:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.923 20:43:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.923 20:43:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.923 20:43:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.923 20:43:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.923 20:43:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.923 20:43:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.923 20:43:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.923 20:43:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.923 20:43:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.923 20:43:22 -- paths/export.sh@5 -- # export PATH 00:11:57.923 20:43:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.923 20:43:22 -- nvmf/common.sh@47 -- # : 0 00:11:57.923 20:43:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.923 20:43:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.923 20:43:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.923 20:43:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.923 20:43:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.923 20:43:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.923 20:43:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.923 20:43:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.923 20:43:22 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.923 20:43:22 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.923 20:43:22 -- target/nvme_cli.sh@14 -- # devs=() 00:11:57.923 20:43:22 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:57.923 20:43:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:57.923 20:43:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.923 20:43:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:57.923 20:43:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:57.923 20:43:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:57.923 20:43:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.923 20:43:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.923 20:43:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.923 20:43:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:11:57.923 20:43:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:57.923 20:43:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.923 20:43:22 -- common/autotest_common.sh@10 -- # set +x 00:12:04.546 20:43:28 -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:04.546 20:43:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.546 20:43:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.546 20:43:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.546 20:43:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.546 20:43:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.546 20:43:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.546 20:43:28 -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.546 20:43:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.546 20:43:28 -- nvmf/common.sh@296 -- # e810=() 00:12:04.546 20:43:28 -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.546 20:43:28 -- nvmf/common.sh@297 -- # x722=() 00:12:04.546 20:43:28 -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.546 20:43:28 -- nvmf/common.sh@298 -- # mlx=() 00:12:04.546 20:43:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.546 20:43:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.546 20:43:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.546 20:43:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.546 20:43:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.546 20:43:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.546 20:43:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:04.546 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:04.546 20:43:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.546 20:43:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:04.546 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:04.546 20:43:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
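The two matches above are the E810 ports (vendor 0x8086, device 0x159b) the test will drive. As a rough, illustrative sketch only (lspci and the sysfs walk below are assumptions for reproducing the lookup by hand, not the test's own helper), the same enumeration looks like:

  # Find E810-class ports by vendor:device ID and show the net device bound to each.
  for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
      echo "Found $bdf (0x8086 - 0x159b)"
      ls "/sys/bus/pci/devices/$bdf/net/"    # e.g. cvl_0_0 / cvl_0_1 in this run
  done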
00:12:04.546 20:43:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.546 20:43:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.546 20:43:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.546 20:43:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:04.546 20:43:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.546 20:43:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:04.546 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:04.546 20:43:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.546 20:43:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.546 20:43:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.546 20:43:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:04.546 20:43:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.546 20:43:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:04.546 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:04.546 20:43:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.546 20:43:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:04.546 20:43:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:04.546 20:43:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:04.546 20:43:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:04.546 20:43:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.546 20:43:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.546 20:43:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.546 20:43:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.546 20:43:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.546 20:43:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.546 20:43:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.546 20:43:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.546 20:43:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.546 20:43:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.546 20:43:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.546 20:43:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.546 20:43:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.546 20:43:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.546 20:43:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.546 20:43:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.546 20:43:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.809 20:43:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.809 20:43:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.809 20:43:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:04.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:12:04.809 00:12:04.809 --- 10.0.0.2 ping statistics --- 00:12:04.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.809 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:12:04.809 20:43:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:12:04.809 00:12:04.809 --- 10.0.0.1 ping statistics --- 00:12:04.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.809 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:12:04.809 20:43:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.809 20:43:29 -- nvmf/common.sh@411 -- # return 0 00:12:04.809 20:43:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:04.809 20:43:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.809 20:43:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:04.809 20:43:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:04.809 20:43:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.809 20:43:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:04.809 20:43:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:04.809 20:43:29 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:04.809 20:43:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:04.809 20:43:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:04.809 20:43:29 -- common/autotest_common.sh@10 -- # set +x 00:12:04.809 20:43:29 -- nvmf/common.sh@470 -- # nvmfpid=2691153 00:12:04.809 20:43:29 -- nvmf/common.sh@471 -- # waitforlisten 2691153 00:12:04.809 20:43:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.809 20:43:29 -- common/autotest_common.sh@817 -- # '[' -z 2691153 ']' 00:12:04.809 20:43:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.809 20:43:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:04.809 20:43:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.809 20:43:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:04.809 20:43:29 -- common/autotest_common.sh@10 -- # set +x 00:12:04.809 [2024-04-24 20:43:29.370204] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:12:04.809 [2024-04-24 20:43:29.370267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.809 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.069 [2024-04-24 20:43:29.460226] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.069 [2024-04-24 20:43:29.555247] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.069 [2024-04-24 20:43:29.555310] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
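At this point the dual-port NVMe/TCP topology is in place and verified by the pings above, and nvmf_tgt is being launched inside the target namespace. Condensed from the commands in the log (same interface, namespace, and address names; this is a recap sketch, not additional test code), the setup amounts to:

  ip netns add cvl_0_0_ns_spdk                                          # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                             # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                    # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                      # target -> initiator sanity check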
00:12:05.069 [2024-04-24 20:43:29.555319] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.069 [2024-04-24 20:43:29.555325] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.069 [2024-04-24 20:43:29.555331] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.069 [2024-04-24 20:43:29.555466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.069 [2024-04-24 20:43:29.555612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.069 [2024-04-24 20:43:29.555784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.069 [2024-04-24 20:43:29.555785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.640 20:43:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:05.640 20:43:30 -- common/autotest_common.sh@850 -- # return 0 00:12:05.640 20:43:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:05.640 20:43:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:05.640 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.901 20:43:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.901 20:43:30 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.901 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.901 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.901 [2024-04-24 20:43:30.293453] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.901 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.901 20:43:30 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:05.901 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.901 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.901 Malloc0 00:12:05.901 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.901 20:43:30 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:05.901 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.901 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.901 Malloc1 00:12:05.901 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.901 20:43:30 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:05.901 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.901 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.901 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.901 20:43:30 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.902 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.902 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.902 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.902 20:43:30 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:05.902 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.902 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.902 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.902 20:43:30 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:05.902 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.902 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.902 [2024-04-24 20:43:30.383215] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.902 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.902 20:43:30 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.902 20:43:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:05.902 20:43:30 -- common/autotest_common.sh@10 -- # set +x 00:12:05.902 20:43:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:05.902 20:43:30 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -a 10.0.0.2 -s 4420 00:12:05.902 00:12:05.902 Discovery Log Number of Records 2, Generation counter 2 00:12:05.902 =====Discovery Log Entry 0====== 00:12:05.902 trtype: tcp 00:12:05.902 adrfam: ipv4 00:12:05.902 subtype: current discovery subsystem 00:12:05.902 treq: not required 00:12:05.902 portid: 0 00:12:05.902 trsvcid: 4420 00:12:05.902 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.902 traddr: 10.0.0.2 00:12:05.902 eflags: explicit discovery connections, duplicate discovery information 00:12:05.902 sectype: none 00:12:05.902 =====Discovery Log Entry 1====== 00:12:05.902 trtype: tcp 00:12:05.902 adrfam: ipv4 00:12:05.902 subtype: nvme subsystem 00:12:05.902 treq: not required 00:12:05.902 portid: 0 00:12:05.902 trsvcid: 4420 00:12:05.902 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:05.902 traddr: 10.0.0.2 00:12:05.902 eflags: none 00:12:05.902 sectype: none 00:12:05.902 20:43:30 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:05.902 20:43:30 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:05.902 20:43:30 -- nvmf/common.sh@511 -- # local dev _ 00:12:05.902 20:43:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:05.902 20:43:30 -- nvmf/common.sh@510 -- # nvme list 00:12:05.902 20:43:30 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:05.902 20:43:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:05.902 20:43:30 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:05.902 20:43:30 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:05.902 20:43:30 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:05.902 20:43:30 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.818 20:43:31 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:07.818 20:43:31 -- common/autotest_common.sh@1184 -- # local i=0 00:12:07.818 20:43:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.818 20:43:31 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:12:07.818 20:43:31 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:12:07.818 20:43:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:09.732 20:43:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:09.732 20:43:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:09.732 20:43:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.732 20:43:34 -- common/autotest_common.sh@1193 -- # nvme_devices=2 
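The connect has just succeeded and the host sees the expected two namespaces. For reference, a condensed sketch of what the test did on each side (commands and values taken from the log above; rpc.py stands in here for the script's rpc_cmd wrapper, and the host NQN/ID placeholders correspond to the generated UUID values shown earlier):

  # Target side, issued against the nvmf_tgt running in cvl_0_0_ns_spdk:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Host side, as run above:
  nvme discover -t tcp -a 10.0.0.2 -s 4420 --hostnqn=<generated uuid NQN> --hostid=<generated uuid>
  nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=<generated uuid NQN> --hostid=<generated uuid>
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2 (nvme0n1 and nvme0n2)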
00:12:09.732 20:43:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.732 20:43:34 -- common/autotest_common.sh@1194 -- # return 0 00:12:09.732 20:43:34 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:09.732 20:43:34 -- nvmf/common.sh@511 -- # local dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@510 -- # nvme list 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:09.732 /dev/nvme0n1 ]] 00:12:09.732 20:43:34 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:09.732 20:43:34 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:09.732 20:43:34 -- nvmf/common.sh@511 -- # local dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@510 -- # nvme list 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ Node == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ --------------------- == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@515 -- # echo /dev/nvme0n2 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- nvmf/common.sh@514 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:09.732 20:43:34 -- nvmf/common.sh@515 -- # echo /dev/nvme0n1 00:12:09.732 20:43:34 -- nvmf/common.sh@513 -- # read -r dev _ 00:12:09.732 20:43:34 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:09.732 20:43:34 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.732 20:43:34 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.732 20:43:34 -- common/autotest_common.sh@1205 -- # local i=0 00:12:09.732 20:43:34 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:09.732 20:43:34 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.732 20:43:34 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:09.732 20:43:34 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.732 20:43:34 -- common/autotest_common.sh@1217 -- # return 0 00:12:09.732 20:43:34 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:09.732 20:43:34 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.732 20:43:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:09.732 20:43:34 -- common/autotest_common.sh@10 -- # set +x 00:12:09.732 20:43:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:09.732 20:43:34 -- 
target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:09.732 20:43:34 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:09.732 20:43:34 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:09.732 20:43:34 -- nvmf/common.sh@117 -- # sync 00:12:09.732 20:43:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.732 20:43:34 -- nvmf/common.sh@120 -- # set +e 00:12:09.732 20:43:34 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.732 20:43:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.732 rmmod nvme_tcp 00:12:09.732 rmmod nvme_fabrics 00:12:09.732 rmmod nvme_keyring 00:12:09.732 20:43:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.732 20:43:34 -- nvmf/common.sh@124 -- # set -e 00:12:09.732 20:43:34 -- nvmf/common.sh@125 -- # return 0 00:12:09.733 20:43:34 -- nvmf/common.sh@478 -- # '[' -n 2691153 ']' 00:12:09.733 20:43:34 -- nvmf/common.sh@479 -- # killprocess 2691153 00:12:09.733 20:43:34 -- common/autotest_common.sh@936 -- # '[' -z 2691153 ']' 00:12:09.733 20:43:34 -- common/autotest_common.sh@940 -- # kill -0 2691153 00:12:09.733 20:43:34 -- common/autotest_common.sh@941 -- # uname 00:12:09.733 20:43:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:09.733 20:43:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2691153 00:12:09.733 20:43:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:09.733 20:43:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:09.733 20:43:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2691153' 00:12:09.733 killing process with pid 2691153 00:12:09.733 20:43:34 -- common/autotest_common.sh@955 -- # kill 2691153 00:12:09.733 20:43:34 -- common/autotest_common.sh@960 -- # wait 2691153 00:12:09.994 20:43:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:09.994 20:43:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:09.994 20:43:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:09.994 20:43:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.994 20:43:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.994 20:43:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.994 20:43:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.994 20:43:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:11.965 20:43:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:11.965 00:12:11.965 real 0m14.346s 00:12:11.965 user 0m21.602s 00:12:11.965 sys 0m5.877s 00:12:11.965 20:43:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:11.965 20:43:36 -- common/autotest_common.sh@10 -- # set +x 00:12:11.965 ************************************ 00:12:11.965 END TEST nvmf_nvme_cli 00:12:11.965 ************************************ 00:12:11.965 20:43:36 -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:11.965 20:43:36 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:11.965 20:43:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:11.965 20:43:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:11.965 20:43:36 -- common/autotest_common.sh@10 -- # set +x 00:12:12.227 ************************************ 00:12:12.227 START TEST nvmf_vfio_user 00:12:12.227 ************************************ 00:12:12.227 20:43:36 -- common/autotest_common.sh@1111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:12.227 * Looking for test storage... 00:12:12.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.227 20:43:36 -- nvmf/common.sh@7 -- # uname -s 00:12:12.227 20:43:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.227 20:43:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.227 20:43:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.227 20:43:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.227 20:43:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.227 20:43:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.227 20:43:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.227 20:43:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.227 20:43:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.227 20:43:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.227 20:43:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:12.227 20:43:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:12:12.227 20:43:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.227 20:43:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.227 20:43:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.227 20:43:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.227 20:43:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.227 20:43:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.227 20:43:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.227 20:43:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.227 20:43:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.227 20:43:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.227 20:43:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.227 20:43:36 -- paths/export.sh@5 -- # export PATH 00:12:12.227 20:43:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.227 20:43:36 -- nvmf/common.sh@47 -- # : 0 00:12:12.227 20:43:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.227 20:43:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.227 20:43:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.227 20:43:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.227 20:43:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.227 20:43:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.227 20:43:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.227 20:43:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:12.227 20:43:36 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:12.489 20:43:36 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:12.489 20:43:36 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2692940 00:12:12.489 20:43:36 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2692940' 00:12:12.489 Process pid: 2692940 00:12:12.489 20:43:36 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:12.489 20:43:36 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2692940 00:12:12.489 20:43:36 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:12.489 20:43:36 -- common/autotest_common.sh@817 -- # '[' -z 2692940 ']' 00:12:12.489 20:43:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.489 20:43:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:12.489 20:43:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.489 20:43:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:12.489 20:43:36 -- common/autotest_common.sh@10 -- # set +x 00:12:12.489 [2024-04-24 20:43:36.931139] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:12:12.489 [2024-04-24 20:43:36.931204] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.489 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.489 [2024-04-24 20:43:37.013246] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:12.489 [2024-04-24 20:43:37.083668] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.489 [2024-04-24 20:43:37.083710] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.489 [2024-04-24 20:43:37.083719] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.489 [2024-04-24 20:43:37.083732] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.489 [2024-04-24 20:43:37.083739] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.489 [2024-04-24 20:43:37.083793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.489 [2024-04-24 20:43:37.083942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.489 [2024-04-24 20:43:37.084082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.489 [2024-04-24 20:43:37.084083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:13.583 20:43:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:13.583 20:43:37 -- common/autotest_common.sh@850 -- # return 0 00:12:13.583 20:43:37 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:14.525 20:43:38 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:14.526 20:43:39 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:14.526 20:43:39 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:14.526 20:43:39 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:14.526 20:43:39 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:14.526 20:43:39 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:14.786 Malloc1 00:12:14.786 20:43:39 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:15.048 20:43:39 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:15.309 20:43:39 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:15.309 20:43:39 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:15.309 20:43:39 -- 
target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:15.309 20:43:39 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:15.569 Malloc2 00:12:15.569 20:43:40 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:15.829 20:43:40 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:16.089 20:43:40 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:16.354 20:43:40 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:16.354 20:43:40 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:16.354 20:43:40 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:16.354 20:43:40 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:16.354 20:43:40 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:16.354 20:43:40 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:16.354 [2024-04-24 20:43:40.816612] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:12:16.354 [2024-04-24 20:43:40.816661] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2693637 ] 00:12:16.354 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.354 [2024-04-24 20:43:40.849344] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:16.354 [2024-04-24 20:43:40.858111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:16.354 [2024-04-24 20:43:40.858130] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb4c44ba000 00:12:16.354 [2024-04-24 20:43:40.859103] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.354 [2024-04-24 20:43:40.860105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.354 [2024-04-24 20:43:40.861104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.354 [2024-04-24 20:43:40.862109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:16.354 [2024-04-24 20:43:40.863120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:16.354 [2024-04-24 20:43:40.864131] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:12:16.354 [2024-04-24 20:43:40.865134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:16.354 [2024-04-24 20:43:40.866132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:16.354 [2024-04-24 20:43:40.867140] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:16.354 [2024-04-24 20:43:40.867153] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb4c44af000 00:12:16.354 [2024-04-24 20:43:40.868481] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:16.354 [2024-04-24 20:43:40.885429] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:16.354 [2024-04-24 20:43:40.885452] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:16.354 [2024-04-24 20:43:40.890294] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:16.354 [2024-04-24 20:43:40.890342] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:16.354 [2024-04-24 20:43:40.890428] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:16.354 [2024-04-24 20:43:40.890448] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:16.354 [2024-04-24 20:43:40.890453] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:16.354 [2024-04-24 20:43:40.891290] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:16.354 [2024-04-24 20:43:40.891300] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:16.354 [2024-04-24 20:43:40.891307] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:16.354 [2024-04-24 20:43:40.892290] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:16.354 [2024-04-24 20:43:40.892298] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:16.354 [2024-04-24 20:43:40.892306] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:16.354 [2024-04-24 20:43:40.893298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:16.354 [2024-04-24 20:43:40.893306] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:16.354 [2024-04-24 20:43:40.894309] 
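The BAR-mapping debug output here comes from spdk_nvme_identify attaching to the first vfio-user endpoint. Condensed from the commands earlier in this log (a recap sketch only; rpc.py again stands in for the full scripts/rpc.py path the test uses), that endpoint was created and probed as:

  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  rpc.py nvmf_create_transport -t VFIOUSER
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

  # then probed from the initiator side with:
  spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci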
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:16.354 [2024-04-24 20:43:40.894317] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:16.354 [2024-04-24 20:43:40.894322] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:16.354 [2024-04-24 20:43:40.894329] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:16.354 [2024-04-24 20:43:40.894434] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:16.354 [2024-04-24 20:43:40.894439] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:16.354 [2024-04-24 20:43:40.894444] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:16.355 [2024-04-24 20:43:40.895309] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:16.355 [2024-04-24 20:43:40.896317] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:16.355 [2024-04-24 20:43:40.897325] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:16.355 [2024-04-24 20:43:40.898321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:16.355 [2024-04-24 20:43:40.898389] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:16.355 [2024-04-24 20:43:40.899332] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:16.355 [2024-04-24 20:43:40.899339] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:16.355 [2024-04-24 20:43:40.899344] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899365] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:16.355 [2024-04-24 20:43:40.899377] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899397] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:16.355 [2024-04-24 20:43:40.899402] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.355 [2024-04-24 20:43:40.899417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.355 [2024-04-24 
20:43:40.899453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:16.355 [2024-04-24 20:43:40.899463] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:16.355 [2024-04-24 20:43:40.899468] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:16.355 [2024-04-24 20:43:40.899472] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:16.355 [2024-04-24 20:43:40.899477] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:16.355 [2024-04-24 20:43:40.899482] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:16.355 [2024-04-24 20:43:40.899486] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:16.355 [2024-04-24 20:43:40.899491] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899498] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:16.355 [2024-04-24 20:43:40.899521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:16.355 [2024-04-24 20:43:40.899534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.355 [2024-04-24 20:43:40.899543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.355 [2024-04-24 20:43:40.899551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.355 [2024-04-24 20:43:40.899559] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.355 [2024-04-24 20:43:40.899563] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899572] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899581] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:16.355 [2024-04-24 20:43:40.899590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:16.355 [2024-04-24 20:43:40.899595] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:16.355 [2024-04-24 20:43:40.899600] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899611] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899618] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899627] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:16.355 [2024-04-24 20:43:40.899638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:16.355 [2024-04-24 20:43:40.899687] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899695] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899703] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:16.355 [2024-04-24 20:43:40.899707] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:16.355 [2024-04-24 20:43:40.899713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:16.355 [2024-04-24 20:43:40.899735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:16.355 [2024-04-24 20:43:40.899745] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:16.355 [2024-04-24 20:43:40.899753] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899762] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899769] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:16.355 [2024-04-24 20:43:40.899773] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.355 [2024-04-24 20:43:40.899779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.355 [2024-04-24 20:43:40.899801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:16.355 [2024-04-24 20:43:40.899813] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899821] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:16.355 [2024-04-24 20:43:40.899828] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fb000 len:4096 00:12:16.355 [2024-04-24 20:43:40.899832] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.355 [2024-04-24 20:43:40.899838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.355 [2024-04-24 20:43:40.899851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.899858] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:16.356 [2024-04-24 20:43:40.899865] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:16.356 [2024-04-24 20:43:40.899872] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:16.356 [2024-04-24 20:43:40.899878] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:16.356 [2024-04-24 20:43:40.899885] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:16.356 [2024-04-24 20:43:40.899890] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:16.356 [2024-04-24 20:43:40.899895] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:16.356 [2024-04-24 20:43:40.899900] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:16.356 [2024-04-24 20:43:40.899917] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:16.356 [2024-04-24 20:43:40.899929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.899940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:16.356 [2024-04-24 20:43:40.899951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.899961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:16.356 [2024-04-24 20:43:40.899975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.899985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:16.356 [2024-04-24 20:43:40.899998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.900009] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:16.356 [2024-04-24 20:43:40.900013] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:16.356 [2024-04-24 20:43:40.900016] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:16.356 [2024-04-24 20:43:40.900020] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:16.356 [2024-04-24 20:43:40.900026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:16.356 [2024-04-24 20:43:40.900033] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:16.356 [2024-04-24 20:43:40.900037] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:16.356 [2024-04-24 20:43:40.900043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:16.356 [2024-04-24 20:43:40.900050] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:16.356 [2024-04-24 20:43:40.900054] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:16.356 [2024-04-24 20:43:40.900060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:16.356 [2024-04-24 20:43:40.900067] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:16.356 [2024-04-24 20:43:40.900071] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:16.356 [2024-04-24 20:43:40.900077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:16.356 [2024-04-24 20:43:40.900084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.900098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.900107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:16.356 [2024-04-24 20:43:40.900114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:16.356 ===================================================== 00:12:16.356 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:16.356 ===================================================== 00:12:16.356 Controller Capabilities/Features 00:12:16.356 ================================ 00:12:16.356 Vendor ID: 4e58 00:12:16.356 Subsystem Vendor ID: 4e58 00:12:16.356 Serial Number: SPDK1 00:12:16.356 Model Number: SPDK bdev Controller 00:12:16.356 Firmware Version: 24.05 00:12:16.356 Recommended Arb Burst: 6 00:12:16.356 IEEE OUI Identifier: 8d 6b 50 00:12:16.356 Multi-path I/O 00:12:16.356 May have multiple subsystem ports: Yes 00:12:16.356 May have multiple controllers: Yes 00:12:16.356 Associated with SR-IOV VF: No 00:12:16.356 Max Data Transfer Size: 131072 00:12:16.356 Max Number of Namespaces: 32 00:12:16.356 Max Number of I/O Queues: 127 00:12:16.356 NVMe 
Specification Version (VS): 1.3 00:12:16.356 NVMe Specification Version (Identify): 1.3 00:12:16.356 Maximum Queue Entries: 256 00:12:16.356 Contiguous Queues Required: Yes 00:12:16.356 Arbitration Mechanisms Supported 00:12:16.356 Weighted Round Robin: Not Supported 00:12:16.356 Vendor Specific: Not Supported 00:12:16.356 Reset Timeout: 15000 ms 00:12:16.356 Doorbell Stride: 4 bytes 00:12:16.356 NVM Subsystem Reset: Not Supported 00:12:16.356 Command Sets Supported 00:12:16.356 NVM Command Set: Supported 00:12:16.356 Boot Partition: Not Supported 00:12:16.356 Memory Page Size Minimum: 4096 bytes 00:12:16.356 Memory Page Size Maximum: 4096 bytes 00:12:16.356 Persistent Memory Region: Not Supported 00:12:16.356 Optional Asynchronous Events Supported 00:12:16.356 Namespace Attribute Notices: Supported 00:12:16.356 Firmware Activation Notices: Not Supported 00:12:16.356 ANA Change Notices: Not Supported 00:12:16.356 PLE Aggregate Log Change Notices: Not Supported 00:12:16.356 LBA Status Info Alert Notices: Not Supported 00:12:16.356 EGE Aggregate Log Change Notices: Not Supported 00:12:16.356 Normal NVM Subsystem Shutdown event: Not Supported 00:12:16.356 Zone Descriptor Change Notices: Not Supported 00:12:16.356 Discovery Log Change Notices: Not Supported 00:12:16.356 Controller Attributes 00:12:16.356 128-bit Host Identifier: Supported 00:12:16.356 Non-Operational Permissive Mode: Not Supported 00:12:16.356 NVM Sets: Not Supported 00:12:16.356 Read Recovery Levels: Not Supported 00:12:16.356 Endurance Groups: Not Supported 00:12:16.356 Predictable Latency Mode: Not Supported 00:12:16.356 Traffic Based Keep ALive: Not Supported 00:12:16.356 Namespace Granularity: Not Supported 00:12:16.356 SQ Associations: Not Supported 00:12:16.357 UUID List: Not Supported 00:12:16.357 Multi-Domain Subsystem: Not Supported 00:12:16.357 Fixed Capacity Management: Not Supported 00:12:16.357 Variable Capacity Management: Not Supported 00:12:16.357 Delete Endurance Group: Not Supported 00:12:16.357 Delete NVM Set: Not Supported 00:12:16.357 Extended LBA Formats Supported: Not Supported 00:12:16.357 Flexible Data Placement Supported: Not Supported 00:12:16.357 00:12:16.357 Controller Memory Buffer Support 00:12:16.357 ================================ 00:12:16.357 Supported: No 00:12:16.357 00:12:16.357 Persistent Memory Region Support 00:12:16.357 ================================ 00:12:16.357 Supported: No 00:12:16.357 00:12:16.357 Admin Command Set Attributes 00:12:16.357 ============================ 00:12:16.357 Security Send/Receive: Not Supported 00:12:16.357 Format NVM: Not Supported 00:12:16.357 Firmware Activate/Download: Not Supported 00:12:16.357 Namespace Management: Not Supported 00:12:16.357 Device Self-Test: Not Supported 00:12:16.357 Directives: Not Supported 00:12:16.357 NVMe-MI: Not Supported 00:12:16.357 Virtualization Management: Not Supported 00:12:16.357 Doorbell Buffer Config: Not Supported 00:12:16.357 Get LBA Status Capability: Not Supported 00:12:16.357 Command & Feature Lockdown Capability: Not Supported 00:12:16.357 Abort Command Limit: 4 00:12:16.357 Async Event Request Limit: 4 00:12:16.357 Number of Firmware Slots: N/A 00:12:16.357 Firmware Slot 1 Read-Only: N/A 00:12:16.357 Firmware Activation Without Reset: N/A 00:12:16.357 Multiple Update Detection Support: N/A 00:12:16.357 Firmware Update Granularity: No Information Provided 00:12:16.357 Per-Namespace SMART Log: No 00:12:16.357 Asymmetric Namespace Access Log Page: Not Supported 00:12:16.357 Subsystem NQN: 
nqn.2019-07.io.spdk:cnode1 00:12:16.357 Command Effects Log Page: Supported 00:12:16.357 Get Log Page Extended Data: Supported 00:12:16.357 Telemetry Log Pages: Not Supported 00:12:16.357 Persistent Event Log Pages: Not Supported 00:12:16.357 Supported Log Pages Log Page: May Support 00:12:16.357 Commands Supported & Effects Log Page: Not Supported 00:12:16.357 Feature Identifiers & Effects Log Page:May Support 00:12:16.357 NVMe-MI Commands & Effects Log Page: May Support 00:12:16.357 Data Area 4 for Telemetry Log: Not Supported 00:12:16.357 Error Log Page Entries Supported: 128 00:12:16.357 Keep Alive: Supported 00:12:16.357 Keep Alive Granularity: 10000 ms 00:12:16.357 00:12:16.357 NVM Command Set Attributes 00:12:16.357 ========================== 00:12:16.357 Submission Queue Entry Size 00:12:16.357 Max: 64 00:12:16.357 Min: 64 00:12:16.357 Completion Queue Entry Size 00:12:16.357 Max: 16 00:12:16.357 Min: 16 00:12:16.357 Number of Namespaces: 32 00:12:16.357 Compare Command: Supported 00:12:16.357 Write Uncorrectable Command: Not Supported 00:12:16.357 Dataset Management Command: Supported 00:12:16.357 Write Zeroes Command: Supported 00:12:16.357 Set Features Save Field: Not Supported 00:12:16.357 Reservations: Not Supported 00:12:16.357 Timestamp: Not Supported 00:12:16.357 Copy: Supported 00:12:16.357 Volatile Write Cache: Present 00:12:16.357 Atomic Write Unit (Normal): 1 00:12:16.357 Atomic Write Unit (PFail): 1 00:12:16.357 Atomic Compare & Write Unit: 1 00:12:16.357 Fused Compare & Write: Supported 00:12:16.357 Scatter-Gather List 00:12:16.357 SGL Command Set: Supported (Dword aligned) 00:12:16.357 SGL Keyed: Not Supported 00:12:16.357 SGL Bit Bucket Descriptor: Not Supported 00:12:16.357 SGL Metadata Pointer: Not Supported 00:12:16.357 Oversized SGL: Not Supported 00:12:16.357 SGL Metadata Address: Not Supported 00:12:16.357 SGL Offset: Not Supported 00:12:16.357 Transport SGL Data Block: Not Supported 00:12:16.357 Replay Protected Memory Block: Not Supported 00:12:16.357 00:12:16.357 Firmware Slot Information 00:12:16.357 ========================= 00:12:16.357 Active slot: 1 00:12:16.357 Slot 1 Firmware Revision: 24.05 00:12:16.357 00:12:16.357 00:12:16.357 Commands Supported and Effects 00:12:16.357 ============================== 00:12:16.357 Admin Commands 00:12:16.357 -------------- 00:12:16.357 Get Log Page (02h): Supported 00:12:16.357 Identify (06h): Supported 00:12:16.357 Abort (08h): Supported 00:12:16.357 Set Features (09h): Supported 00:12:16.357 Get Features (0Ah): Supported 00:12:16.357 Asynchronous Event Request (0Ch): Supported 00:12:16.357 Keep Alive (18h): Supported 00:12:16.357 I/O Commands 00:12:16.357 ------------ 00:12:16.357 Flush (00h): Supported LBA-Change 00:12:16.357 Write (01h): Supported LBA-Change 00:12:16.357 Read (02h): Supported 00:12:16.357 Compare (05h): Supported 00:12:16.357 Write Zeroes (08h): Supported LBA-Change 00:12:16.357 Dataset Management (09h): Supported LBA-Change 00:12:16.357 Copy (19h): Supported LBA-Change 00:12:16.357 Unknown (79h): Supported LBA-Change 00:12:16.357 Unknown (7Ah): Supported 00:12:16.357 00:12:16.357 Error Log 00:12:16.357 ========= 00:12:16.357 00:12:16.357 Arbitration 00:12:16.357 =========== 00:12:16.357 Arbitration Burst: 1 00:12:16.357 00:12:16.357 Power Management 00:12:16.357 ================ 00:12:16.357 Number of Power States: 1 00:12:16.357 Current Power State: Power State #0 00:12:16.358 Power State #0: 00:12:16.358 Max Power: 0.00 W 00:12:16.358 Non-Operational State: Operational 00:12:16.358 Entry 
Latency: Not Reported 00:12:16.358 Exit Latency: Not Reported 00:12:16.358 Relative Read Throughput: 0 00:12:16.358 Relative Read Latency: 0 00:12:16.358 Relative Write Throughput: 0 00:12:16.358 Relative Write Latency: 0 00:12:16.358 Idle Power: Not Reported 00:12:16.358 Active Power: Not Reported 00:12:16.358 Non-Operational Permissive Mode: Not Supported 00:12:16.358 00:12:16.358 Health Information 00:12:16.358 ================== 00:12:16.358 Critical Warnings: 00:12:16.358 Available Spare Space: OK 00:12:16.358 Temperature: OK 00:12:16.358 Device Reliability: OK 00:12:16.358 Read Only: No 00:12:16.358 Volatile Memory Backup: OK 00:12:16.358 Current Temperature: 0 Kelvin (-2[2024-04-24 20:43:40.900217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:16.358 [2024-04-24 20:43:40.900230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:16.358 [2024-04-24 20:43:40.900255] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:16.358 [2024-04-24 20:43:40.900264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.358 [2024-04-24 20:43:40.900271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.358 [2024-04-24 20:43:40.900277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.358 [2024-04-24 20:43:40.900283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.358 [2024-04-24 20:43:40.903732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:16.358 [2024-04-24 20:43:40.903744] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:16.358 [2024-04-24 20:43:40.904355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:16.358 [2024-04-24 20:43:40.904405] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:16.358 [2024-04-24 20:43:40.904411] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:16.358 [2024-04-24 20:43:40.905362] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:16.358 [2024-04-24 20:43:40.905372] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:16.358 [2024-04-24 20:43:40.905439] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:16.358 [2024-04-24 20:43:40.907396] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:16.358 73 Celsius) 00:12:16.358 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:16.358 Available Spare: 0% 00:12:16.358 Available Spare Threshold: 0% 00:12:16.358 Life Percentage Used: 0% 
00:12:16.358 Data Units Read: 0 00:12:16.358 Data Units Written: 0 00:12:16.358 Host Read Commands: 0 00:12:16.358 Host Write Commands: 0 00:12:16.358 Controller Busy Time: 0 minutes 00:12:16.358 Power Cycles: 0 00:12:16.358 Power On Hours: 0 hours 00:12:16.358 Unsafe Shutdowns: 0 00:12:16.358 Unrecoverable Media Errors: 0 00:12:16.358 Lifetime Error Log Entries: 0 00:12:16.358 Warning Temperature Time: 0 minutes 00:12:16.358 Critical Temperature Time: 0 minutes 00:12:16.358 00:12:16.358 Number of Queues 00:12:16.358 ================ 00:12:16.358 Number of I/O Submission Queues: 127 00:12:16.358 Number of I/O Completion Queues: 127 00:12:16.358 00:12:16.358 Active Namespaces 00:12:16.358 ================= 00:12:16.358 Namespace ID:1 00:12:16.358 Error Recovery Timeout: Unlimited 00:12:16.358 Command Set Identifier: NVM (00h) 00:12:16.358 Deallocate: Supported 00:12:16.358 Deallocated/Unwritten Error: Not Supported 00:12:16.358 Deallocated Read Value: Unknown 00:12:16.358 Deallocate in Write Zeroes: Not Supported 00:12:16.358 Deallocated Guard Field: 0xFFFF 00:12:16.358 Flush: Supported 00:12:16.358 Reservation: Supported 00:12:16.358 Namespace Sharing Capabilities: Multiple Controllers 00:12:16.358 Size (in LBAs): 131072 (0GiB) 00:12:16.358 Capacity (in LBAs): 131072 (0GiB) 00:12:16.358 Utilization (in LBAs): 131072 (0GiB) 00:12:16.358 NGUID: 949D1DC6E16F4F4DBE5E01457F25AB63 00:12:16.358 UUID: 949d1dc6-e16f-4f4d-be5e-01457f25ab63 00:12:16.358 Thin Provisioning: Not Supported 00:12:16.358 Per-NS Atomic Units: Yes 00:12:16.358 Atomic Boundary Size (Normal): 0 00:12:16.358 Atomic Boundary Size (PFail): 0 00:12:16.358 Atomic Boundary Offset: 0 00:12:16.358 Maximum Single Source Range Length: 65535 00:12:16.358 Maximum Copy Length: 65535 00:12:16.358 Maximum Source Range Count: 1 00:12:16.358 NGUID/EUI64 Never Reused: No 00:12:16.358 Namespace Write Protected: No 00:12:16.358 Number of LBA Formats: 1 00:12:16.358 Current LBA Format: LBA Format #00 00:12:16.358 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:16.358 00:12:16.358 20:43:40 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:16.358 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.620 [2024-04-24 20:43:41.108417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:21.911 [2024-04-24 20:43:46.127106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:21.911 Initializing NVMe Controllers 00:12:21.911 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:21.911 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:21.911 Initialization complete. Launching workers. 
00:12:21.911 ======================================================== 00:12:21.911 Latency(us) 00:12:21.911 Device Information : IOPS MiB/s Average min max 00:12:21.911 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34901.56 136.33 3666.67 1213.43 7871.32 00:12:21.911 ======================================================== 00:12:21.911 Total : 34901.56 136.33 3666.67 1213.43 7871.32 00:12:21.911 00:12:21.911 20:43:46 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:21.911 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.911 [2024-04-24 20:43:46.333096] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:27.196 [2024-04-24 20:43:51.366569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:27.196 Initializing NVMe Controllers 00:12:27.196 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:27.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:27.196 Initialization complete. Launching workers. 00:12:27.196 ======================================================== 00:12:27.196 Latency(us) 00:12:27.196 Device Information : IOPS MiB/s Average min max 00:12:27.196 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.31 6980.27 8981.32 00:12:27.196 ======================================================== 00:12:27.196 Total : 16051.20 62.70 7980.31 6980.27 8981.32 00:12:27.196 00:12:27.196 20:43:51 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:27.196 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.196 [2024-04-24 20:43:51.592590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.523 [2024-04-24 20:43:56.691029] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.523 Initializing NVMe Controllers 00:12:32.523 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.523 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.523 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:32.523 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:32.523 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:32.523 Initialization complete. Launching workers. 
00:12:32.523 Starting thread on core 2 00:12:32.523 Starting thread on core 3 00:12:32.523 Starting thread on core 1 00:12:32.523 20:43:56 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:32.523 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.523 [2024-04-24 20:43:56.963104] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.823 [2024-04-24 20:44:00.035557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.823 Initializing NVMe Controllers 00:12:35.823 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.823 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:35.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:35.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:35.823 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:35.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:35.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:35.823 Initialization complete. Launching workers. 00:12:35.823 Starting thread on core 1 with urgent priority queue 00:12:35.823 Starting thread on core 2 with urgent priority queue 00:12:35.823 Starting thread on core 3 with urgent priority queue 00:12:35.823 Starting thread on core 0 with urgent priority queue 00:12:35.823 SPDK bdev Controller (SPDK1 ) core 0: 8679.67 IO/s 11.52 secs/100000 ios 00:12:35.823 SPDK bdev Controller (SPDK1 ) core 1: 13763.67 IO/s 7.27 secs/100000 ios 00:12:35.823 SPDK bdev Controller (SPDK1 ) core 2: 8359.33 IO/s 11.96 secs/100000 ios 00:12:35.823 SPDK bdev Controller (SPDK1 ) core 3: 11594.00 IO/s 8.63 secs/100000 ios 00:12:35.823 ======================================================== 00:12:35.823 00:12:35.823 20:44:00 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:35.823 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.823 [2024-04-24 20:44:00.297212] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:35.823 [2024-04-24 20:44:00.331448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:35.823 Initializing NVMe Controllers 00:12:35.823 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.823 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:35.823 Namespace ID: 1 size: 0GB 00:12:35.823 Initialization complete. 00:12:35.823 INFO: using host memory buffer for IO 00:12:35.823 Hello world! 
00:12:35.823 20:44:00 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:35.823 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.084 [2024-04-24 20:44:00.594194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.026 Initializing NVMe Controllers 00:12:37.026 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:37.026 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:37.026 Initialization complete. Launching workers. 00:12:37.026 submit (in ns) avg, min, max = 8317.6, 3886.7, 4000854.2 00:12:37.026 complete (in ns) avg, min, max = 16952.9, 2381.7, 5992472.5 00:12:37.026 00:12:37.026 Submit histogram 00:12:37.026 ================ 00:12:37.026 Range in us Cumulative Count 00:12:37.026 3.867 - 3.893: 0.0561% ( 11) 00:12:37.026 3.893 - 3.920: 3.4049% ( 657) 00:12:37.026 3.920 - 3.947: 11.4226% ( 1573) 00:12:37.026 3.947 - 3.973: 22.5954% ( 2192) 00:12:37.026 3.973 - 4.000: 34.1353% ( 2264) 00:12:37.026 4.000 - 4.027: 45.4916% ( 2228) 00:12:37.026 4.027 - 4.053: 60.7982% ( 3003) 00:12:37.026 4.053 - 4.080: 75.3759% ( 2860) 00:12:37.026 4.080 - 4.107: 87.5070% ( 2380) 00:12:37.026 4.107 - 4.133: 94.2403% ( 1321) 00:12:37.026 4.133 - 4.160: 97.4260% ( 625) 00:12:37.026 4.160 - 4.187: 98.7206% ( 254) 00:12:37.026 4.187 - 4.213: 99.2609% ( 106) 00:12:37.026 4.213 - 4.240: 99.3985% ( 27) 00:12:37.026 4.240 - 4.267: 99.4240% ( 5) 00:12:37.026 4.267 - 4.293: 99.4546% ( 6) 00:12:37.026 4.293 - 4.320: 99.4597% ( 1) 00:12:37.026 4.400 - 4.427: 99.4648% ( 1) 00:12:37.026 4.587 - 4.613: 99.4699% ( 1) 00:12:37.026 4.720 - 4.747: 99.4801% ( 2) 00:12:37.026 4.747 - 4.773: 99.4852% ( 1) 00:12:37.026 4.853 - 4.880: 99.4903% ( 1) 00:12:37.026 4.880 - 4.907: 99.5005% ( 2) 00:12:37.026 4.933 - 4.960: 99.5107% ( 2) 00:12:37.026 5.413 - 5.440: 99.5158% ( 1) 00:12:37.026 5.467 - 5.493: 99.5209% ( 1) 00:12:37.026 5.493 - 5.520: 99.5260% ( 1) 00:12:37.026 5.707 - 5.733: 99.5311% ( 1) 00:12:37.026 5.760 - 5.787: 99.5362% ( 1) 00:12:37.026 5.787 - 5.813: 99.5413% ( 1) 00:12:37.026 5.813 - 5.840: 99.5515% ( 2) 00:12:37.026 5.867 - 5.893: 99.5616% ( 2) 00:12:37.026 5.893 - 5.920: 99.5718% ( 2) 00:12:37.026 5.947 - 5.973: 99.5820% ( 2) 00:12:37.026 5.973 - 6.000: 99.5871% ( 1) 00:12:37.026 6.000 - 6.027: 99.6024% ( 3) 00:12:37.026 6.027 - 6.053: 99.6075% ( 1) 00:12:37.026 6.053 - 6.080: 99.6126% ( 1) 00:12:37.026 6.107 - 6.133: 99.6177% ( 1) 00:12:37.026 6.133 - 6.160: 99.6279% ( 2) 00:12:37.026 6.160 - 6.187: 99.6330% ( 1) 00:12:37.026 6.187 - 6.213: 99.6381% ( 1) 00:12:37.026 6.240 - 6.267: 99.6432% ( 1) 00:12:37.026 6.320 - 6.347: 99.6483% ( 1) 00:12:37.026 6.347 - 6.373: 99.6534% ( 1) 00:12:37.026 6.373 - 6.400: 99.6585% ( 1) 00:12:37.026 6.427 - 6.453: 99.6636% ( 1) 00:12:37.026 6.453 - 6.480: 99.6738% ( 2) 00:12:37.026 6.480 - 6.507: 99.6840% ( 2) 00:12:37.026 6.507 - 6.533: 99.6891% ( 1) 00:12:37.027 6.587 - 6.613: 99.6993% ( 2) 00:12:37.027 6.613 - 6.640: 99.7095% ( 2) 00:12:37.027 6.640 - 6.667: 99.7146% ( 1) 00:12:37.027 6.693 - 6.720: 99.7248% ( 2) 00:12:37.027 6.720 - 6.747: 99.7299% ( 1) 00:12:37.027 6.747 - 6.773: 99.7350% ( 1) 00:12:37.027 6.773 - 6.800: 99.7451% ( 2) 00:12:37.027 6.827 - 6.880: 99.7502% ( 1) 00:12:37.027 6.880 - 6.933: 99.7655% ( 3) 00:12:37.027 6.933 - 6.987: 99.7706% ( 1) 00:12:37.027 7.147 - 7.200: 99.7757% ( 1) 
00:12:37.027 7.200 - 7.253: 99.7910% ( 3) 00:12:37.027 7.253 - 7.307: 99.7961% ( 1) 00:12:37.027 7.360 - 7.413: 99.8114% ( 3) 00:12:37.027 7.413 - 7.467: 99.8216% ( 2) 00:12:37.027 7.520 - 7.573: 99.8318% ( 2) 00:12:37.027 7.573 - 7.627: 99.8420% ( 2) 00:12:37.027 7.680 - 7.733: 99.8471% ( 1) 00:12:37.027 7.733 - 7.787: 99.8624% ( 3) 00:12:37.027 7.893 - 7.947: 99.8675% ( 1) 00:12:37.027 8.213 - 8.267: 99.8726% ( 1) 00:12:37.027 8.267 - 8.320: 99.8777% ( 1) 00:12:37.027 9.013 - 9.067: 99.8828% ( 1) 00:12:37.027 10.720 - 10.773: 99.8879% ( 1) 00:12:37.027 11.573 - 11.627: 99.8930% ( 1) 00:12:37.027 3986.773 - 4014.080: 100.0000% ( 21) 00:12:37.027 00:12:37.027 Complete histogram 00:12:37.027 ================== 00:12:37.027 Range in us Cumulative Count 00:12:37.027 2.373 - 2.387: 0.0051% ( 1) 00:12:37.027 2.387 - [2024-04-24 20:44:01.616362] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.027 2.400: 0.0255% ( 4) 00:12:37.027 2.400 - 2.413: 1.1723% ( 225) 00:12:37.027 2.413 - 2.427: 1.3456% ( 34) 00:12:37.027 2.427 - 2.440: 1.5648% ( 43) 00:12:37.027 2.440 - 2.453: 1.6871% ( 24) 00:12:37.027 2.453 - 2.467: 1.7126% ( 5) 00:12:37.027 2.467 - 2.480: 44.1460% ( 8325) 00:12:37.027 2.480 - 2.493: 61.4251% ( 3390) 00:12:37.027 2.493 - 2.507: 71.0689% ( 1892) 00:12:37.027 2.507 - 2.520: 78.7145% ( 1500) 00:12:37.027 2.520 - 2.533: 81.5791% ( 562) 00:12:37.027 2.533 - 2.547: 84.9330% ( 658) 00:12:37.027 2.547 - 2.560: 90.8558% ( 1162) 00:12:37.027 2.560 - 2.573: 94.9946% ( 812) 00:12:37.027 2.573 - 2.587: 97.2272% ( 438) 00:12:37.027 2.587 - 2.600: 98.6085% ( 271) 00:12:37.027 2.600 - 2.613: 99.1590% ( 108) 00:12:37.027 2.613 - 2.627: 99.3221% ( 32) 00:12:37.027 2.627 - 2.640: 99.3425% ( 4) 00:12:37.027 2.640 - 2.653: 99.3527% ( 2) 00:12:37.027 2.653 - 2.667: 99.3578% ( 1) 00:12:37.027 4.320 - 4.347: 99.3629% ( 1) 00:12:37.027 4.347 - 4.373: 99.3680% ( 1) 00:12:37.027 4.400 - 4.427: 99.3731% ( 1) 00:12:37.027 4.453 - 4.480: 99.3782% ( 1) 00:12:37.027 4.480 - 4.507: 99.3833% ( 1) 00:12:37.027 4.507 - 4.533: 99.3934% ( 2) 00:12:37.027 4.560 - 4.587: 99.3985% ( 1) 00:12:37.027 4.800 - 4.827: 99.4036% ( 1) 00:12:37.027 4.853 - 4.880: 99.4189% ( 3) 00:12:37.027 4.987 - 5.013: 99.4240% ( 1) 00:12:37.027 5.013 - 5.040: 99.4291% ( 1) 00:12:37.027 5.067 - 5.093: 99.4342% ( 1) 00:12:37.027 5.093 - 5.120: 99.4444% ( 2) 00:12:37.027 5.120 - 5.147: 99.4495% ( 1) 00:12:37.027 5.200 - 5.227: 99.4546% ( 1) 00:12:37.027 5.253 - 5.280: 99.4597% ( 1) 00:12:37.027 5.280 - 5.307: 99.4648% ( 1) 00:12:37.027 5.333 - 5.360: 99.4699% ( 1) 00:12:37.027 5.387 - 5.413: 99.4750% ( 1) 00:12:37.027 5.413 - 5.440: 99.4801% ( 1) 00:12:37.027 5.493 - 5.520: 99.4852% ( 1) 00:12:37.027 5.520 - 5.547: 99.4903% ( 1) 00:12:37.027 5.547 - 5.573: 99.4954% ( 1) 00:12:37.027 5.573 - 5.600: 99.5005% ( 1) 00:12:37.027 5.627 - 5.653: 99.5107% ( 2) 00:12:37.027 5.653 - 5.680: 99.5209% ( 2) 00:12:37.027 5.813 - 5.840: 99.5311% ( 2) 00:12:37.027 5.867 - 5.893: 99.5362% ( 1) 00:12:37.027 5.893 - 5.920: 99.5413% ( 1) 00:12:37.027 6.053 - 6.080: 99.5464% ( 1) 00:12:37.027 6.160 - 6.187: 99.5515% ( 1) 00:12:37.027 6.213 - 6.240: 99.5566% ( 1) 00:12:37.027 6.240 - 6.267: 99.5616% ( 1) 00:12:37.027 6.320 - 6.347: 99.5667% ( 1) 00:12:37.027 6.507 - 6.533: 99.5769% ( 2) 00:12:37.027 6.533 - 6.560: 99.5820% ( 1) 00:12:37.027 6.640 - 6.667: 99.5871% ( 1) 00:12:37.027 6.800 - 6.827: 99.5922% ( 1) 00:12:37.027 7.093 - 7.147: 99.5973% ( 1) 00:12:37.027 7.840 - 7.893: 99.6024% ( 1) 00:12:37.027 
10.400 - 10.453: 99.6075% ( 1) 00:12:37.027 11.360 - 11.413: 99.6126% ( 1) 00:12:37.027 13.333 - 13.387: 99.6177% ( 1) 00:12:37.027 13.493 - 13.547: 99.6228% ( 1) 00:12:37.027 13.867 - 13.973: 99.6279% ( 1) 00:12:37.027 14.827 - 14.933: 99.6330% ( 1) 00:12:37.027 44.587 - 44.800: 99.6381% ( 1) 00:12:37.027 1740.800 - 1747.627: 99.6432% ( 1) 00:12:37.027 3986.773 - 4014.080: 99.9949% ( 69) 00:12:37.027 5980.160 - 6007.467: 100.0000% ( 1) 00:12:37.027 00:12:37.027 20:44:01 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:37.027 20:44:01 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:37.027 20:44:01 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:37.027 20:44:01 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:37.027 20:44:01 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:37.288 [2024-04-24 20:44:01.852127] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:12:37.288 [ 00:12:37.288 { 00:12:37.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:37.288 "subtype": "Discovery", 00:12:37.288 "listen_addresses": [], 00:12:37.288 "allow_any_host": true, 00:12:37.288 "hosts": [] 00:12:37.288 }, 00:12:37.288 { 00:12:37.288 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:37.288 "subtype": "NVMe", 00:12:37.288 "listen_addresses": [ 00:12:37.288 { 00:12:37.288 "transport": "VFIOUSER", 00:12:37.288 "trtype": "VFIOUSER", 00:12:37.288 "adrfam": "IPv4", 00:12:37.288 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:37.288 "trsvcid": "0" 00:12:37.288 } 00:12:37.288 ], 00:12:37.288 "allow_any_host": true, 00:12:37.288 "hosts": [], 00:12:37.288 "serial_number": "SPDK1", 00:12:37.288 "model_number": "SPDK bdev Controller", 00:12:37.288 "max_namespaces": 32, 00:12:37.288 "min_cntlid": 1, 00:12:37.288 "max_cntlid": 65519, 00:12:37.288 "namespaces": [ 00:12:37.288 { 00:12:37.288 "nsid": 1, 00:12:37.288 "bdev_name": "Malloc1", 00:12:37.288 "name": "Malloc1", 00:12:37.288 "nguid": "949D1DC6E16F4F4DBE5E01457F25AB63", 00:12:37.288 "uuid": "949d1dc6-e16f-4f4d-be5e-01457f25ab63" 00:12:37.288 } 00:12:37.288 ] 00:12:37.288 }, 00:12:37.288 { 00:12:37.288 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:37.288 "subtype": "NVMe", 00:12:37.288 "listen_addresses": [ 00:12:37.288 { 00:12:37.288 "transport": "VFIOUSER", 00:12:37.288 "trtype": "VFIOUSER", 00:12:37.288 "adrfam": "IPv4", 00:12:37.288 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:37.288 "trsvcid": "0" 00:12:37.288 } 00:12:37.288 ], 00:12:37.288 "allow_any_host": true, 00:12:37.288 "hosts": [], 00:12:37.288 "serial_number": "SPDK2", 00:12:37.288 "model_number": "SPDK bdev Controller", 00:12:37.288 "max_namespaces": 32, 00:12:37.288 "min_cntlid": 1, 00:12:37.288 "max_cntlid": 65519, 00:12:37.288 "namespaces": [ 00:12:37.288 { 00:12:37.288 "nsid": 1, 00:12:37.288 "bdev_name": "Malloc2", 00:12:37.288 "name": "Malloc2", 00:12:37.288 "nguid": "2FC4C0EFE99647798C10F12B97C37754", 00:12:37.288 "uuid": "2fc4c0ef-e996-4779-8c10-f12b97c37754" 00:12:37.288 } 00:12:37.288 ] 00:12:37.288 } 00:12:37.288 ] 00:12:37.288 20:44:01 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:37.288 20:44:01 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2697672 00:12:37.288 20:44:01 -- 
target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:37.288 20:44:01 -- common/autotest_common.sh@1251 -- # local i=0 00:12:37.288 20:44:01 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:37.288 20:44:01 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:37.288 20:44:01 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:37.288 20:44:01 -- common/autotest_common.sh@1262 -- # return 0 00:12:37.288 20:44:01 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:37.289 20:44:01 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:37.550 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.550 [2024-04-24 20:44:02.050182] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.550 Malloc3 00:12:37.550 20:44:02 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:37.810 [2024-04-24 20:44:02.292077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.810 20:44:02 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:37.810 Asynchronous Event Request test 00:12:37.810 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:37.810 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:37.810 Registering asynchronous event callbacks... 00:12:37.810 Starting namespace attribute notice tests for all controllers... 00:12:37.810 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:37.810 aer_cb - Changed Namespace 00:12:37.810 Cleaning up... 
00:12:38.073 [ 00:12:38.073 { 00:12:38.073 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:38.073 "subtype": "Discovery", 00:12:38.073 "listen_addresses": [], 00:12:38.073 "allow_any_host": true, 00:12:38.073 "hosts": [] 00:12:38.073 }, 00:12:38.073 { 00:12:38.073 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:38.073 "subtype": "NVMe", 00:12:38.073 "listen_addresses": [ 00:12:38.073 { 00:12:38.073 "transport": "VFIOUSER", 00:12:38.073 "trtype": "VFIOUSER", 00:12:38.073 "adrfam": "IPv4", 00:12:38.073 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:38.073 "trsvcid": "0" 00:12:38.073 } 00:12:38.073 ], 00:12:38.073 "allow_any_host": true, 00:12:38.073 "hosts": [], 00:12:38.073 "serial_number": "SPDK1", 00:12:38.073 "model_number": "SPDK bdev Controller", 00:12:38.073 "max_namespaces": 32, 00:12:38.073 "min_cntlid": 1, 00:12:38.073 "max_cntlid": 65519, 00:12:38.073 "namespaces": [ 00:12:38.073 { 00:12:38.073 "nsid": 1, 00:12:38.073 "bdev_name": "Malloc1", 00:12:38.073 "name": "Malloc1", 00:12:38.073 "nguid": "949D1DC6E16F4F4DBE5E01457F25AB63", 00:12:38.073 "uuid": "949d1dc6-e16f-4f4d-be5e-01457f25ab63" 00:12:38.073 }, 00:12:38.073 { 00:12:38.073 "nsid": 2, 00:12:38.073 "bdev_name": "Malloc3", 00:12:38.073 "name": "Malloc3", 00:12:38.073 "nguid": "D95CE9D9CF1A4AA4BB823565EA2A3CBB", 00:12:38.073 "uuid": "d95ce9d9-cf1a-4aa4-bb82-3565ea2a3cbb" 00:12:38.073 } 00:12:38.073 ] 00:12:38.073 }, 00:12:38.073 { 00:12:38.073 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:38.073 "subtype": "NVMe", 00:12:38.073 "listen_addresses": [ 00:12:38.073 { 00:12:38.073 "transport": "VFIOUSER", 00:12:38.073 "trtype": "VFIOUSER", 00:12:38.073 "adrfam": "IPv4", 00:12:38.073 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:38.073 "trsvcid": "0" 00:12:38.073 } 00:12:38.073 ], 00:12:38.073 "allow_any_host": true, 00:12:38.073 "hosts": [], 00:12:38.073 "serial_number": "SPDK2", 00:12:38.073 "model_number": "SPDK bdev Controller", 00:12:38.073 "max_namespaces": 32, 00:12:38.073 "min_cntlid": 1, 00:12:38.073 "max_cntlid": 65519, 00:12:38.073 "namespaces": [ 00:12:38.073 { 00:12:38.073 "nsid": 1, 00:12:38.073 "bdev_name": "Malloc2", 00:12:38.074 "name": "Malloc2", 00:12:38.074 "nguid": "2FC4C0EFE99647798C10F12B97C37754", 00:12:38.074 "uuid": "2fc4c0ef-e996-4779-8c10-f12b97c37754" 00:12:38.074 } 00:12:38.074 ] 00:12:38.074 } 00:12:38.074 ] 00:12:38.074 20:44:02 -- target/nvmf_vfio_user.sh@44 -- # wait 2697672 00:12:38.074 20:44:02 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:38.074 20:44:02 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:38.074 20:44:02 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:38.074 20:44:02 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:38.074 [2024-04-24 20:44:02.560755] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:12:38.074 [2024-04-24 20:44:02.560797] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2697925 ] 00:12:38.074 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.074 [2024-04-24 20:44:02.592248] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:38.074 [2024-04-24 20:44:02.601017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:38.074 [2024-04-24 20:44:02.601038] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff740502000 00:12:38.074 [2024-04-24 20:44:02.602013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.603018] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.604028] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.605034] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.606036] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.607042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.608046] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.609054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:38.074 [2024-04-24 20:44:02.610067] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:38.074 [2024-04-24 20:44:02.610080] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff7404f7000 00:12:38.074 [2024-04-24 20:44:02.611406] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:38.074 [2024-04-24 20:44:02.631891] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:38.074 [2024-04-24 20:44:02.631914] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:38.074 [2024-04-24 20:44:02.633965] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:38.074 [2024-04-24 20:44:02.634007] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:38.074 [2024-04-24 20:44:02.634090] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq 
(no timeout) 00:12:38.074 [2024-04-24 20:44:02.634105] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:38.074 [2024-04-24 20:44:02.634110] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:38.074 [2024-04-24 20:44:02.634966] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:38.074 [2024-04-24 20:44:02.634976] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:38.074 [2024-04-24 20:44:02.634983] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:38.074 [2024-04-24 20:44:02.635969] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:38.074 [2024-04-24 20:44:02.635978] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:38.074 [2024-04-24 20:44:02.635986] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:38.074 [2024-04-24 20:44:02.636976] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:38.074 [2024-04-24 20:44:02.636985] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:38.074 [2024-04-24 20:44:02.637989] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:38.074 [2024-04-24 20:44:02.637997] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:38.074 [2024-04-24 20:44:02.638002] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:38.074 [2024-04-24 20:44:02.638009] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:38.074 [2024-04-24 20:44:02.638114] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:38.074 [2024-04-24 20:44:02.638118] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:38.074 [2024-04-24 20:44:02.638123] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:38.074 [2024-04-24 20:44:02.638999] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:38.074 [2024-04-24 20:44:02.640005] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:38.074 [2024-04-24 20:44:02.641020] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:38.074 [2024-04-24 20:44:02.642023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:38.074 [2024-04-24 20:44:02.642060] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:38.074 [2024-04-24 20:44:02.643039] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:38.074 [2024-04-24 20:44:02.643047] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:38.074 [2024-04-24 20:44:02.643052] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:38.074 [2024-04-24 20:44:02.643073] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:38.074 [2024-04-24 20:44:02.643080] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:38.074 [2024-04-24 20:44:02.643094] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:38.074 [2024-04-24 20:44:02.643099] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.074 [2024-04-24 20:44:02.643111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.074 [2024-04-24 20:44:02.649734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:38.074 [2024-04-24 20:44:02.649746] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:38.074 [2024-04-24 20:44:02.649751] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:38.074 [2024-04-24 20:44:02.649755] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:38.074 [2024-04-24 20:44:02.649760] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:38.074 [2024-04-24 20:44:02.649764] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:38.074 [2024-04-24 20:44:02.649769] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:38.074 [2024-04-24 20:44:02.649773] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.649781] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.649790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:38.075 [2024-04-24 20:44:02.657731] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:38.075 [2024-04-24 20:44:02.657746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.075 [2024-04-24 20:44:02.657755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.075 [2024-04-24 20:44:02.657765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.075 [2024-04-24 20:44:02.657773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.075 [2024-04-24 20:44:02.657778] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.657786] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.657795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:38.075 [2024-04-24 20:44:02.665731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:38.075 [2024-04-24 20:44:02.665740] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:38.075 [2024-04-24 20:44:02.665745] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.665754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.665759] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.665768] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:38.075 [2024-04-24 20:44:02.673732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:38.075 [2024-04-24 20:44:02.673783] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.673790] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.673798] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:38.075 [2024-04-24 20:44:02.673802] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:38.075 [2024-04-24 20:44:02.673808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:38.075 
[2024-04-24 20:44:02.681732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:38.075 [2024-04-24 20:44:02.681743] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:38.075 [2024-04-24 20:44:02.681756] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.681763] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.681770] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:38.075 [2024-04-24 20:44:02.681774] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.075 [2024-04-24 20:44:02.681780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.075 [2024-04-24 20:44:02.689730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:38.075 [2024-04-24 20:44:02.689746] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.689754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.689761] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:38.075 [2024-04-24 20:44:02.689765] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.075 [2024-04-24 20:44:02.689771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.075 [2024-04-24 20:44:02.697730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:38.075 [2024-04-24 20:44:02.697740] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.697747] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.697755] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.697760] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.697765] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.697770] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:38.075 [2024-04-24 20:44:02.697775] 
nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:38.075 [2024-04-24 20:44:02.697780] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:38.075 [2024-04-24 20:44:02.697795] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:38.075 [2024-04-24 20:44:02.705732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:38.075 [2024-04-24 20:44:02.705744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:38.337 [2024-04-24 20:44:02.713732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:38.337 [2024-04-24 20:44:02.713746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:38.337 [2024-04-24 20:44:02.721732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:38.337 [2024-04-24 20:44:02.721744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:38.337 [2024-04-24 20:44:02.729729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:38.337 [2024-04-24 20:44:02.729742] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:38.337 [2024-04-24 20:44:02.729746] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:38.337 [2024-04-24 20:44:02.729750] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:38.337 [2024-04-24 20:44:02.729754] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:38.337 [2024-04-24 20:44:02.729765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:38.337 [2024-04-24 20:44:02.729772] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:38.337 [2024-04-24 20:44:02.729777] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:38.337 [2024-04-24 20:44:02.729782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:38.337 [2024-04-24 20:44:02.729789] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:38.337 [2024-04-24 20:44:02.729793] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:38.337 [2024-04-24 20:44:02.729799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:38.337 [2024-04-24 20:44:02.729807] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:38.337 [2024-04-24 20:44:02.729811] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:38.337 [2024-04-24 20:44:02.729817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:38.337 [2024-04-24 20:44:02.737732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:38.337 [2024-04-24 20:44:02.737746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:38.337 [2024-04-24 20:44:02.737755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:38.337 [2024-04-24 20:44:02.737762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:38.337 ===================================================== 00:12:38.337 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:38.337 ===================================================== 00:12:38.337 Controller Capabilities/Features 00:12:38.337 ================================ 00:12:38.337 Vendor ID: 4e58 00:12:38.337 Subsystem Vendor ID: 4e58 00:12:38.337 Serial Number: SPDK2 00:12:38.337 Model Number: SPDK bdev Controller 00:12:38.337 Firmware Version: 24.05 00:12:38.337 Recommended Arb Burst: 6 00:12:38.337 IEEE OUI Identifier: 8d 6b 50 00:12:38.337 Multi-path I/O 00:12:38.337 May have multiple subsystem ports: Yes 00:12:38.337 May have multiple controllers: Yes 00:12:38.337 Associated with SR-IOV VF: No 00:12:38.337 Max Data Transfer Size: 131072 00:12:38.337 Max Number of Namespaces: 32 00:12:38.337 Max Number of I/O Queues: 127 00:12:38.337 NVMe Specification Version (VS): 1.3 00:12:38.337 NVMe Specification Version (Identify): 1.3 00:12:38.337 Maximum Queue Entries: 256 00:12:38.337 Contiguous Queues Required: Yes 00:12:38.337 Arbitration Mechanisms Supported 00:12:38.337 Weighted Round Robin: Not Supported 00:12:38.337 Vendor Specific: Not Supported 00:12:38.337 Reset Timeout: 15000 ms 00:12:38.337 Doorbell Stride: 4 bytes 00:12:38.337 NVM Subsystem Reset: Not Supported 00:12:38.337 Command Sets Supported 00:12:38.337 NVM Command Set: Supported 00:12:38.337 Boot Partition: Not Supported 00:12:38.337 Memory Page Size Minimum: 4096 bytes 00:12:38.337 Memory Page Size Maximum: 4096 bytes 00:12:38.337 Persistent Memory Region: Not Supported 00:12:38.337 Optional Asynchronous Events Supported 00:12:38.337 Namespace Attribute Notices: Supported 00:12:38.337 Firmware Activation Notices: Not Supported 00:12:38.337 ANA Change Notices: Not Supported 00:12:38.337 PLE Aggregate Log Change Notices: Not Supported 00:12:38.337 LBA Status Info Alert Notices: Not Supported 00:12:38.337 EGE Aggregate Log Change Notices: Not Supported 00:12:38.337 Normal NVM Subsystem Shutdown event: Not Supported 00:12:38.337 Zone Descriptor Change Notices: Not Supported 00:12:38.337 Discovery Log Change Notices: Not Supported 00:12:38.337 Controller Attributes 00:12:38.337 128-bit Host Identifier: Supported 00:12:38.337 Non-Operational Permissive Mode: Not Supported 00:12:38.337 NVM Sets: Not Supported 00:12:38.337 Read Recovery Levels: Not Supported 00:12:38.337 Endurance Groups: Not Supported 00:12:38.337 Predictable Latency Mode: Not Supported 00:12:38.337 Traffic Based Keep ALive: Not Supported 00:12:38.337 Namespace Granularity: Not Supported 
00:12:38.337 SQ Associations: Not Supported 00:12:38.337 UUID List: Not Supported 00:12:38.337 Multi-Domain Subsystem: Not Supported 00:12:38.338 Fixed Capacity Management: Not Supported 00:12:38.338 Variable Capacity Management: Not Supported 00:12:38.338 Delete Endurance Group: Not Supported 00:12:38.338 Delete NVM Set: Not Supported 00:12:38.338 Extended LBA Formats Supported: Not Supported 00:12:38.338 Flexible Data Placement Supported: Not Supported 00:12:38.338 00:12:38.338 Controller Memory Buffer Support 00:12:38.338 ================================ 00:12:38.338 Supported: No 00:12:38.338 00:12:38.338 Persistent Memory Region Support 00:12:38.338 ================================ 00:12:38.338 Supported: No 00:12:38.338 00:12:38.338 Admin Command Set Attributes 00:12:38.338 ============================ 00:12:38.338 Security Send/Receive: Not Supported 00:12:38.338 Format NVM: Not Supported 00:12:38.338 Firmware Activate/Download: Not Supported 00:12:38.338 Namespace Management: Not Supported 00:12:38.338 Device Self-Test: Not Supported 00:12:38.338 Directives: Not Supported 00:12:38.338 NVMe-MI: Not Supported 00:12:38.338 Virtualization Management: Not Supported 00:12:38.338 Doorbell Buffer Config: Not Supported 00:12:38.338 Get LBA Status Capability: Not Supported 00:12:38.338 Command & Feature Lockdown Capability: Not Supported 00:12:38.338 Abort Command Limit: 4 00:12:38.338 Async Event Request Limit: 4 00:12:38.338 Number of Firmware Slots: N/A 00:12:38.338 Firmware Slot 1 Read-Only: N/A 00:12:38.338 Firmware Activation Without Reset: N/A 00:12:38.338 Multiple Update Detection Support: N/A 00:12:38.338 Firmware Update Granularity: No Information Provided 00:12:38.338 Per-Namespace SMART Log: No 00:12:38.338 Asymmetric Namespace Access Log Page: Not Supported 00:12:38.338 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:38.338 Command Effects Log Page: Supported 00:12:38.338 Get Log Page Extended Data: Supported 00:12:38.338 Telemetry Log Pages: Not Supported 00:12:38.338 Persistent Event Log Pages: Not Supported 00:12:38.338 Supported Log Pages Log Page: May Support 00:12:38.338 Commands Supported & Effects Log Page: Not Supported 00:12:38.338 Feature Identifiers & Effects Log Page:May Support 00:12:38.338 NVMe-MI Commands & Effects Log Page: May Support 00:12:38.338 Data Area 4 for Telemetry Log: Not Supported 00:12:38.338 Error Log Page Entries Supported: 128 00:12:38.338 Keep Alive: Supported 00:12:38.338 Keep Alive Granularity: 10000 ms 00:12:38.338 00:12:38.338 NVM Command Set Attributes 00:12:38.338 ========================== 00:12:38.338 Submission Queue Entry Size 00:12:38.338 Max: 64 00:12:38.338 Min: 64 00:12:38.338 Completion Queue Entry Size 00:12:38.338 Max: 16 00:12:38.338 Min: 16 00:12:38.338 Number of Namespaces: 32 00:12:38.338 Compare Command: Supported 00:12:38.338 Write Uncorrectable Command: Not Supported 00:12:38.338 Dataset Management Command: Supported 00:12:38.338 Write Zeroes Command: Supported 00:12:38.338 Set Features Save Field: Not Supported 00:12:38.338 Reservations: Not Supported 00:12:38.338 Timestamp: Not Supported 00:12:38.338 Copy: Supported 00:12:38.338 Volatile Write Cache: Present 00:12:38.338 Atomic Write Unit (Normal): 1 00:12:38.338 Atomic Write Unit (PFail): 1 00:12:38.338 Atomic Compare & Write Unit: 1 00:12:38.338 Fused Compare & Write: Supported 00:12:38.338 Scatter-Gather List 00:12:38.338 SGL Command Set: Supported (Dword aligned) 00:12:38.338 SGL Keyed: Not Supported 00:12:38.338 SGL Bit Bucket Descriptor: Not Supported 00:12:38.338 
SGL Metadata Pointer: Not Supported 00:12:38.338 Oversized SGL: Not Supported 00:12:38.338 SGL Metadata Address: Not Supported 00:12:38.338 SGL Offset: Not Supported 00:12:38.338 Transport SGL Data Block: Not Supported 00:12:38.338 Replay Protected Memory Block: Not Supported 00:12:38.338 00:12:38.338 Firmware Slot Information 00:12:38.338 ========================= 00:12:38.338 Active slot: 1 00:12:38.338 Slot 1 Firmware Revision: 24.05 00:12:38.338 00:12:38.338 00:12:38.338 Commands Supported and Effects 00:12:38.338 ============================== 00:12:38.338 Admin Commands 00:12:38.338 -------------- 00:12:38.338 Get Log Page (02h): Supported 00:12:38.338 Identify (06h): Supported 00:12:38.338 Abort (08h): Supported 00:12:38.338 Set Features (09h): Supported 00:12:38.338 Get Features (0Ah): Supported 00:12:38.338 Asynchronous Event Request (0Ch): Supported 00:12:38.338 Keep Alive (18h): Supported 00:12:38.338 I/O Commands 00:12:38.338 ------------ 00:12:38.338 Flush (00h): Supported LBA-Change 00:12:38.338 Write (01h): Supported LBA-Change 00:12:38.338 Read (02h): Supported 00:12:38.338 Compare (05h): Supported 00:12:38.338 Write Zeroes (08h): Supported LBA-Change 00:12:38.338 Dataset Management (09h): Supported LBA-Change 00:12:38.338 Copy (19h): Supported LBA-Change 00:12:38.338 Unknown (79h): Supported LBA-Change 00:12:38.338 Unknown (7Ah): Supported 00:12:38.338 00:12:38.338 Error Log 00:12:38.338 ========= 00:12:38.338 00:12:38.338 Arbitration 00:12:38.338 =========== 00:12:38.338 Arbitration Burst: 1 00:12:38.338 00:12:38.338 Power Management 00:12:38.338 ================ 00:12:38.338 Number of Power States: 1 00:12:38.338 Current Power State: Power State #0 00:12:38.338 Power State #0: 00:12:38.338 Max Power: 0.00 W 00:12:38.338 Non-Operational State: Operational 00:12:38.338 Entry Latency: Not Reported 00:12:38.338 Exit Latency: Not Reported 00:12:38.338 Relative Read Throughput: 0 00:12:38.338 Relative Read Latency: 0 00:12:38.338 Relative Write Throughput: 0 00:12:38.338 Relative Write Latency: 0 00:12:38.338 Idle Power: Not Reported 00:12:38.338 Active Power: Not Reported 00:12:38.338 Non-Operational Permissive Mode: Not Supported 00:12:38.338 00:12:38.338 Health Information 00:12:38.338 ================== 00:12:38.338 Critical Warnings: 00:12:38.338 Available Spare Space: OK 00:12:38.338 Temperature: OK 00:12:38.338 Device Reliability: OK 00:12:38.338 Read Only: No 00:12:38.338 Volatile Memory Backup: OK 00:12:38.338 Current Temperature: 0 Kelvin (-2[2024-04-24 20:44:02.737862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:38.338 [2024-04-24 20:44:02.745730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:38.338 [2024-04-24 20:44:02.745757] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:38.338 [2024-04-24 20:44:02.745766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.338 [2024-04-24 20:44:02.745772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.338 [2024-04-24 20:44:02.745778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.338 [2024-04-24 20:44:02.745784] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.338 [2024-04-24 20:44:02.745840] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:38.338 [2024-04-24 20:44:02.745850] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:38.338 [2024-04-24 20:44:02.746853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:38.338 [2024-04-24 20:44:02.746900] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:38.338 [2024-04-24 20:44:02.746906] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:38.338 [2024-04-24 20:44:02.747860] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:38.339 [2024-04-24 20:44:02.747874] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:38.339 [2024-04-24 20:44:02.747925] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:38.339 [2024-04-24 20:44:02.750731] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:38.339 73 Celsius) 00:12:38.339 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:38.339 Available Spare: 0% 00:12:38.339 Available Spare Threshold: 0% 00:12:38.339 Life Percentage Used: 0% 00:12:38.339 Data Units Read: 0 00:12:38.339 Data Units Written: 0 00:12:38.339 Host Read Commands: 0 00:12:38.339 Host Write Commands: 0 00:12:38.339 Controller Busy Time: 0 minutes 00:12:38.339 Power Cycles: 0 00:12:38.339 Power On Hours: 0 hours 00:12:38.339 Unsafe Shutdowns: 0 00:12:38.339 Unrecoverable Media Errors: 0 00:12:38.339 Lifetime Error Log Entries: 0 00:12:38.339 Warning Temperature Time: 0 minutes 00:12:38.339 Critical Temperature Time: 0 minutes 00:12:38.339 00:12:38.339 Number of Queues 00:12:38.339 ================ 00:12:38.339 Number of I/O Submission Queues: 127 00:12:38.339 Number of I/O Completion Queues: 127 00:12:38.339 00:12:38.339 Active Namespaces 00:12:38.339 ================= 00:12:38.339 Namespace ID:1 00:12:38.339 Error Recovery Timeout: Unlimited 00:12:38.339 Command Set Identifier: NVM (00h) 00:12:38.339 Deallocate: Supported 00:12:38.339 Deallocated/Unwritten Error: Not Supported 00:12:38.339 Deallocated Read Value: Unknown 00:12:38.339 Deallocate in Write Zeroes: Not Supported 00:12:38.339 Deallocated Guard Field: 0xFFFF 00:12:38.339 Flush: Supported 00:12:38.339 Reservation: Supported 00:12:38.339 Namespace Sharing Capabilities: Multiple Controllers 00:12:38.339 Size (in LBAs): 131072 (0GiB) 00:12:38.339 Capacity (in LBAs): 131072 (0GiB) 00:12:38.339 Utilization (in LBAs): 131072 (0GiB) 00:12:38.339 NGUID: 2FC4C0EFE99647798C10F12B97C37754 00:12:38.339 UUID: 2fc4c0ef-e996-4779-8c10-f12b97c37754 00:12:38.339 Thin Provisioning: Not Supported 00:12:38.339 Per-NS Atomic Units: Yes 00:12:38.339 Atomic Boundary Size (Normal): 0 00:12:38.339 Atomic Boundary Size (PFail): 0 00:12:38.339 Atomic Boundary Offset: 0 00:12:38.339 Maximum Single Source Range Length: 65535 
00:12:38.339 Maximum Copy Length: 65535 00:12:38.339 Maximum Source Range Count: 1 00:12:38.339 NGUID/EUI64 Never Reused: No 00:12:38.339 Namespace Write Protected: No 00:12:38.339 Number of LBA Formats: 1 00:12:38.339 Current LBA Format: LBA Format #00 00:12:38.339 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:38.339 00:12:38.339 20:44:02 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:38.339 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.339 [2024-04-24 20:44:02.942031] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:43.625 [2024-04-24 20:44:08.046930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:43.625 Initializing NVMe Controllers 00:12:43.625 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:43.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:43.625 Initialization complete. Launching workers. 00:12:43.625 ======================================================== 00:12:43.625 Latency(us) 00:12:43.625 Device Information : IOPS MiB/s Average min max 00:12:43.625 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 44102.52 172.28 2901.60 904.20 7712.33 00:12:43.625 ======================================================== 00:12:43.625 Total : 44102.52 172.28 2901.60 904.20 7712.33 00:12:43.625 00:12:43.625 20:44:08 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:43.625 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.625 [2024-04-24 20:44:08.240564] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:48.910 [2024-04-24 20:44:13.261595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:48.910 Initializing NVMe Controllers 00:12:48.910 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:48.910 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:48.910 Initialization complete. Launching workers. 
00:12:48.910 ======================================================== 00:12:48.910 Latency(us) 00:12:48.910 Device Information : IOPS MiB/s Average min max 00:12:48.910 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34818.21 136.01 3675.40 1208.90 7606.23 00:12:48.910 ======================================================== 00:12:48.910 Total : 34818.21 136.01 3675.40 1208.90 7606.23 00:12:48.910 00:12:48.910 20:44:13 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:48.910 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.910 [2024-04-24 20:44:13.485995] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:54.196 [2024-04-24 20:44:18.622835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:54.196 Initializing NVMe Controllers 00:12:54.196 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.196 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:54.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:54.196 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:54.196 Initialization complete. Launching workers. 00:12:54.196 Starting thread on core 2 00:12:54.196 Starting thread on core 3 00:12:54.196 Starting thread on core 1 00:12:54.196 20:44:18 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:54.196 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.480 [2024-04-24 20:44:18.900234] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.795 [2024-04-24 20:44:21.956549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.795 Initializing NVMe Controllers 00:12:57.795 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.795 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.795 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:57.795 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:57.795 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:57.795 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:57.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:57.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:57.795 Initialization complete. Launching workers. 
00:12:57.795 Starting thread on core 1 with urgent priority queue 00:12:57.795 Starting thread on core 2 with urgent priority queue 00:12:57.795 Starting thread on core 3 with urgent priority queue 00:12:57.795 Starting thread on core 0 with urgent priority queue 00:12:57.795 SPDK bdev Controller (SPDK2 ) core 0: 9963.00 IO/s 10.04 secs/100000 ios 00:12:57.795 SPDK bdev Controller (SPDK2 ) core 1: 9161.67 IO/s 10.92 secs/100000 ios 00:12:57.795 SPDK bdev Controller (SPDK2 ) core 2: 12066.67 IO/s 8.29 secs/100000 ios 00:12:57.795 SPDK bdev Controller (SPDK2 ) core 3: 12888.33 IO/s 7.76 secs/100000 ios 00:12:57.795 ======================================================== 00:12:57.795 00:12:57.795 20:44:22 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:57.795 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.795 [2024-04-24 20:44:22.217183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:57.795 [2024-04-24 20:44:22.226234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:57.795 Initializing NVMe Controllers 00:12:57.795 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.795 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:57.795 Namespace ID: 1 size: 0GB 00:12:57.795 Initialization complete. 00:12:57.795 INFO: using host memory buffer for IO 00:12:57.795 Hello world! 00:12:57.795 20:44:22 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:57.795 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.054 [2024-04-24 20:44:22.479651] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.003 Initializing NVMe Controllers 00:12:59.003 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:59.003 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:59.003 Initialization complete. Launching workers. 
00:12:59.003 submit (in ns) avg, min, max = 8760.5, 3885.8, 4105597.5 00:12:59.003 complete (in ns) avg, min, max = 22314.8, 2367.5, 3999230.0 00:12:59.003 00:12:59.003 Submit histogram 00:12:59.003 ================ 00:12:59.003 Range in us Cumulative Count 00:12:59.003 3.867 - 3.893: 0.1178% ( 18) 00:12:59.003 3.893 - 3.920: 2.8992% ( 425) 00:12:59.003 3.920 - 3.947: 9.7971% ( 1054) 00:12:59.003 3.947 - 3.973: 19.5288% ( 1487) 00:12:59.003 3.973 - 4.000: 31.0929% ( 1767) 00:12:59.003 4.000 - 4.027: 42.8599% ( 1798) 00:12:59.003 4.027 - 4.053: 56.6427% ( 2106) 00:12:59.003 4.053 - 4.080: 71.7016% ( 2301) 00:12:59.003 4.080 - 4.107: 84.8822% ( 2014) 00:12:59.003 4.107 - 4.133: 93.3246% ( 1290) 00:12:59.003 4.133 - 4.160: 97.5196% ( 641) 00:12:59.003 4.160 - 4.187: 98.9136% ( 213) 00:12:59.003 4.187 - 4.213: 99.2866% ( 57) 00:12:59.003 4.213 - 4.240: 99.3783% ( 14) 00:12:59.003 4.240 - 4.267: 99.3979% ( 3) 00:12:59.003 4.267 - 4.293: 99.4306% ( 5) 00:12:59.003 4.293 - 4.320: 99.4503% ( 3) 00:12:59.003 4.347 - 4.373: 99.4568% ( 1) 00:12:59.003 4.373 - 4.400: 99.4699% ( 2) 00:12:59.003 4.480 - 4.507: 99.4764% ( 1) 00:12:59.003 4.613 - 4.640: 99.4830% ( 1) 00:12:59.003 4.640 - 4.667: 99.4895% ( 1) 00:12:59.003 4.853 - 4.880: 99.4961% ( 1) 00:12:59.003 4.907 - 4.933: 99.5026% ( 1) 00:12:59.003 4.987 - 5.013: 99.5157% ( 2) 00:12:59.003 5.067 - 5.093: 99.5223% ( 1) 00:12:59.003 5.413 - 5.440: 99.5353% ( 2) 00:12:59.003 5.600 - 5.627: 99.5419% ( 1) 00:12:59.003 5.707 - 5.733: 99.5484% ( 1) 00:12:59.003 5.840 - 5.867: 99.5550% ( 1) 00:12:59.003 5.867 - 5.893: 99.5615% ( 1) 00:12:59.003 5.973 - 6.000: 99.5681% ( 1) 00:12:59.003 6.000 - 6.027: 99.5746% ( 1) 00:12:59.003 6.027 - 6.053: 99.5812% ( 1) 00:12:59.003 6.080 - 6.107: 99.5942% ( 2) 00:12:59.003 6.107 - 6.133: 99.6008% ( 1) 00:12:59.003 6.133 - 6.160: 99.6204% ( 3) 00:12:59.003 6.160 - 6.187: 99.6270% ( 1) 00:12:59.003 6.267 - 6.293: 99.6335% ( 1) 00:12:59.003 6.293 - 6.320: 99.6401% ( 1) 00:12:59.003 6.373 - 6.400: 99.6466% ( 1) 00:12:59.003 6.400 - 6.427: 99.6531% ( 1) 00:12:59.003 6.480 - 6.507: 99.6597% ( 1) 00:12:59.003 6.560 - 6.587: 99.6793% ( 3) 00:12:59.003 6.587 - 6.613: 99.6859% ( 1) 00:12:59.003 6.613 - 6.640: 99.6924% ( 1) 00:12:59.003 6.640 - 6.667: 99.6990% ( 1) 00:12:59.003 6.667 - 6.693: 99.7055% ( 1) 00:12:59.003 6.693 - 6.720: 99.7120% ( 1) 00:12:59.003 6.720 - 6.747: 99.7251% ( 2) 00:12:59.003 6.880 - 6.933: 99.7317% ( 1) 00:12:59.003 6.933 - 6.987: 99.7448% ( 2) 00:12:59.003 6.987 - 7.040: 99.7513% ( 1) 00:12:59.003 7.093 - 7.147: 99.7644% ( 2) 00:12:59.003 7.147 - 7.200: 99.7709% ( 1) 00:12:59.003 7.200 - 7.253: 99.7906% ( 3) 00:12:59.003 7.253 - 7.307: 99.8037% ( 2) 00:12:59.003 7.307 - 7.360: 99.8102% ( 1) 00:12:59.003 7.360 - 7.413: 99.8233% ( 2) 00:12:59.003 7.413 - 7.467: 99.8298% ( 1) 00:12:59.003 7.467 - 7.520: 99.8364% ( 1) 00:12:59.003 7.520 - 7.573: 99.8429% ( 1) 00:12:59.003 7.733 - 7.787: 99.8495% ( 1) 00:12:59.003 7.840 - 7.893: 99.8560% ( 1) 00:12:59.003 8.000 - 8.053: 99.8626% ( 1) 00:12:59.004 8.107 - 8.160: 99.8691% ( 1) 00:12:59.004 8.853 - 8.907: 99.8757% ( 1) 00:12:59.004 14.613 - 14.720: 99.8822% ( 1) 00:12:59.004 3986.773 - 4014.080: 99.9935% ( 17) 00:12:59.004 4096.000 - 4123.307: 100.0000% ( 1) 00:12:59.004 00:12:59.004 Complete histogram 00:12:59.004 ================== 00:12:59.004 Range in us Cumulative Count 00:12:59.004 2.360 - 2.373: 1.1387% ( 174) 00:12:59.004 2.373 - 2.387: 2.8730% ( 265) 00:12:59.004 2.387 - 2.400: 3.1152% ( 37) 00:12:59.004 2.400 - 2.413: 40.0654% ( 5646) 00:12:59.004 2.413 
- 2.427: 60.3010% ( 3092) 00:12:59.004 2.427 - 2.440: 69.6401% ( 1427) 00:12:59.004 2.440 - 2.453: 78.1675% ( 1303) 00:12:59.004 2.453 - [2024-04-24 20:44:23.575355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.004 2.467: 81.8390% ( 561) 00:12:59.004 2.467 - 2.480: 83.2134% ( 210) 00:12:59.004 2.480 - 2.493: 87.0942% ( 593) 00:12:59.004 2.493 - 2.507: 92.1204% ( 768) 00:12:59.004 2.507 - 2.520: 95.7003% ( 547) 00:12:59.004 2.520 - 2.533: 97.6374% ( 296) 00:12:59.004 2.533 - 2.547: 98.6911% ( 161) 00:12:59.004 2.547 - 2.560: 99.1623% ( 72) 00:12:59.004 2.560 - 2.573: 99.2932% ( 20) 00:12:59.004 2.573 - 2.587: 99.3128% ( 3) 00:12:59.004 2.587 - 2.600: 99.3325% ( 3) 00:12:59.004 4.267 - 4.293: 99.3390% ( 1) 00:12:59.004 4.400 - 4.427: 99.3455% ( 1) 00:12:59.004 4.907 - 4.933: 99.3521% ( 1) 00:12:59.004 4.933 - 4.960: 99.3586% ( 1) 00:12:59.004 5.013 - 5.040: 99.3652% ( 1) 00:12:59.004 5.173 - 5.200: 99.3717% ( 1) 00:12:59.004 5.200 - 5.227: 99.3783% ( 1) 00:12:59.004 5.227 - 5.253: 99.3848% ( 1) 00:12:59.004 5.253 - 5.280: 99.3914% ( 1) 00:12:59.004 5.387 - 5.413: 99.3979% ( 1) 00:12:59.004 5.493 - 5.520: 99.4045% ( 1) 00:12:59.004 5.520 - 5.547: 99.4110% ( 1) 00:12:59.004 5.600 - 5.627: 99.4175% ( 1) 00:12:59.004 5.680 - 5.707: 99.4306% ( 2) 00:12:59.004 5.733 - 5.760: 99.4372% ( 1) 00:12:59.004 5.867 - 5.893: 99.4437% ( 1) 00:12:59.004 6.160 - 6.187: 99.4503% ( 1) 00:12:59.004 6.187 - 6.213: 99.4634% ( 2) 00:12:59.004 6.747 - 6.773: 99.4699% ( 1) 00:12:59.004 8.053 - 8.107: 99.4764% ( 1) 00:12:59.004 9.280 - 9.333: 99.4830% ( 1) 00:12:59.004 14.187 - 14.293: 99.4895% ( 1) 00:12:59.004 33.280 - 33.493: 99.4961% ( 1) 00:12:59.004 43.733 - 43.947: 99.5026% ( 1) 00:12:59.004 3986.773 - 4014.080: 100.0000% ( 76) 00:12:59.004 00:12:59.004 20:44:23 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:59.004 20:44:23 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:59.004 20:44:23 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:59.004 20:44:23 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:59.004 20:44:23 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:59.264 [ 00:12:59.264 { 00:12:59.264 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:59.264 "subtype": "Discovery", 00:12:59.264 "listen_addresses": [], 00:12:59.264 "allow_any_host": true, 00:12:59.264 "hosts": [] 00:12:59.264 }, 00:12:59.264 { 00:12:59.264 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:59.264 "subtype": "NVMe", 00:12:59.264 "listen_addresses": [ 00:12:59.264 { 00:12:59.264 "transport": "VFIOUSER", 00:12:59.264 "trtype": "VFIOUSER", 00:12:59.264 "adrfam": "IPv4", 00:12:59.264 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:59.264 "trsvcid": "0" 00:12:59.264 } 00:12:59.264 ], 00:12:59.264 "allow_any_host": true, 00:12:59.264 "hosts": [], 00:12:59.264 "serial_number": "SPDK1", 00:12:59.264 "model_number": "SPDK bdev Controller", 00:12:59.264 "max_namespaces": 32, 00:12:59.264 "min_cntlid": 1, 00:12:59.264 "max_cntlid": 65519, 00:12:59.264 "namespaces": [ 00:12:59.264 { 00:12:59.264 "nsid": 1, 00:12:59.264 "bdev_name": "Malloc1", 00:12:59.264 "name": "Malloc1", 00:12:59.264 "nguid": "949D1DC6E16F4F4DBE5E01457F25AB63", 00:12:59.264 "uuid": "949d1dc6-e16f-4f4d-be5e-01457f25ab63" 00:12:59.264 }, 00:12:59.264 { 
00:12:59.264 "nsid": 2, 00:12:59.264 "bdev_name": "Malloc3", 00:12:59.264 "name": "Malloc3", 00:12:59.264 "nguid": "D95CE9D9CF1A4AA4BB823565EA2A3CBB", 00:12:59.264 "uuid": "d95ce9d9-cf1a-4aa4-bb82-3565ea2a3cbb" 00:12:59.264 } 00:12:59.264 ] 00:12:59.264 }, 00:12:59.264 { 00:12:59.264 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:59.264 "subtype": "NVMe", 00:12:59.264 "listen_addresses": [ 00:12:59.264 { 00:12:59.264 "transport": "VFIOUSER", 00:12:59.264 "trtype": "VFIOUSER", 00:12:59.264 "adrfam": "IPv4", 00:12:59.264 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:59.264 "trsvcid": "0" 00:12:59.264 } 00:12:59.264 ], 00:12:59.264 "allow_any_host": true, 00:12:59.264 "hosts": [], 00:12:59.264 "serial_number": "SPDK2", 00:12:59.264 "model_number": "SPDK bdev Controller", 00:12:59.264 "max_namespaces": 32, 00:12:59.264 "min_cntlid": 1, 00:12:59.264 "max_cntlid": 65519, 00:12:59.264 "namespaces": [ 00:12:59.264 { 00:12:59.264 "nsid": 1, 00:12:59.264 "bdev_name": "Malloc2", 00:12:59.264 "name": "Malloc2", 00:12:59.264 "nguid": "2FC4C0EFE99647798C10F12B97C37754", 00:12:59.264 "uuid": "2fc4c0ef-e996-4779-8c10-f12b97c37754" 00:12:59.264 } 00:12:59.264 ] 00:12:59.264 } 00:12:59.264 ] 00:12:59.264 20:44:23 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:59.264 20:44:23 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2702037 00:12:59.264 20:44:23 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:59.264 20:44:23 -- common/autotest_common.sh@1251 -- # local i=0 00:12:59.264 20:44:23 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:59.264 20:44:23 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:59.264 20:44:23 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:59.264 20:44:23 -- common/autotest_common.sh@1262 -- # return 0 00:12:59.264 20:44:23 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:59.264 20:44:23 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:59.264 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.525 [2024-04-24 20:44:24.005118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:59.525 Malloc4 00:12:59.525 20:44:24 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:59.785 [2024-04-24 20:44:24.249737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:59.785 20:44:24 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:59.785 Asynchronous Event Request test 00:12:59.785 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:59.785 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:59.785 Registering asynchronous event callbacks... 00:12:59.785 Starting namespace attribute notice tests for all controllers... 00:12:59.785 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:59.785 aer_cb - Changed Namespace 00:12:59.785 Cleaning up... 
00:13:00.045 [ 00:13:00.045 { 00:13:00.045 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:00.045 "subtype": "Discovery", 00:13:00.045 "listen_addresses": [], 00:13:00.045 "allow_any_host": true, 00:13:00.045 "hosts": [] 00:13:00.045 }, 00:13:00.045 { 00:13:00.045 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:00.045 "subtype": "NVMe", 00:13:00.045 "listen_addresses": [ 00:13:00.045 { 00:13:00.045 "transport": "VFIOUSER", 00:13:00.045 "trtype": "VFIOUSER", 00:13:00.045 "adrfam": "IPv4", 00:13:00.045 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:00.045 "trsvcid": "0" 00:13:00.045 } 00:13:00.045 ], 00:13:00.045 "allow_any_host": true, 00:13:00.045 "hosts": [], 00:13:00.045 "serial_number": "SPDK1", 00:13:00.045 "model_number": "SPDK bdev Controller", 00:13:00.045 "max_namespaces": 32, 00:13:00.045 "min_cntlid": 1, 00:13:00.045 "max_cntlid": 65519, 00:13:00.045 "namespaces": [ 00:13:00.045 { 00:13:00.045 "nsid": 1, 00:13:00.045 "bdev_name": "Malloc1", 00:13:00.045 "name": "Malloc1", 00:13:00.045 "nguid": "949D1DC6E16F4F4DBE5E01457F25AB63", 00:13:00.045 "uuid": "949d1dc6-e16f-4f4d-be5e-01457f25ab63" 00:13:00.045 }, 00:13:00.045 { 00:13:00.045 "nsid": 2, 00:13:00.045 "bdev_name": "Malloc3", 00:13:00.045 "name": "Malloc3", 00:13:00.045 "nguid": "D95CE9D9CF1A4AA4BB823565EA2A3CBB", 00:13:00.045 "uuid": "d95ce9d9-cf1a-4aa4-bb82-3565ea2a3cbb" 00:13:00.045 } 00:13:00.045 ] 00:13:00.045 }, 00:13:00.045 { 00:13:00.045 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:00.045 "subtype": "NVMe", 00:13:00.045 "listen_addresses": [ 00:13:00.045 { 00:13:00.045 "transport": "VFIOUSER", 00:13:00.045 "trtype": "VFIOUSER", 00:13:00.045 "adrfam": "IPv4", 00:13:00.045 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:00.045 "trsvcid": "0" 00:13:00.045 } 00:13:00.045 ], 00:13:00.045 "allow_any_host": true, 00:13:00.045 "hosts": [], 00:13:00.045 "serial_number": "SPDK2", 00:13:00.045 "model_number": "SPDK bdev Controller", 00:13:00.045 "max_namespaces": 32, 00:13:00.045 "min_cntlid": 1, 00:13:00.045 "max_cntlid": 65519, 00:13:00.045 "namespaces": [ 00:13:00.045 { 00:13:00.045 "nsid": 1, 00:13:00.045 "bdev_name": "Malloc2", 00:13:00.045 "name": "Malloc2", 00:13:00.045 "nguid": "2FC4C0EFE99647798C10F12B97C37754", 00:13:00.045 "uuid": "2fc4c0ef-e996-4779-8c10-f12b97c37754" 00:13:00.045 }, 00:13:00.045 { 00:13:00.045 "nsid": 2, 00:13:00.045 "bdev_name": "Malloc4", 00:13:00.045 "name": "Malloc4", 00:13:00.045 "nguid": "9D2BCDAE65B541DB9F732374E63C7037", 00:13:00.045 "uuid": "9d2bcdae-65b5-41db-9f73-2374e63c7037" 00:13:00.045 } 00:13:00.045 ] 00:13:00.045 } 00:13:00.045 ] 00:13:00.045 20:44:24 -- target/nvmf_vfio_user.sh@44 -- # wait 2702037 00:13:00.045 20:44:24 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:00.045 20:44:24 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2692940 00:13:00.045 20:44:24 -- common/autotest_common.sh@936 -- # '[' -z 2692940 ']' 00:13:00.045 20:44:24 -- common/autotest_common.sh@940 -- # kill -0 2692940 00:13:00.045 20:44:24 -- common/autotest_common.sh@941 -- # uname 00:13:00.045 20:44:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:00.045 20:44:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2692940 00:13:00.045 20:44:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:00.045 20:44:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:00.045 20:44:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2692940' 00:13:00.045 killing process with pid 2692940 00:13:00.045 
20:44:24 -- common/autotest_common.sh@955 -- # kill 2692940 00:13:00.045 [2024-04-24 20:44:24.548831] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:13:00.045 20:44:24 -- common/autotest_common.sh@960 -- # wait 2692940 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2702222 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2702222' 00:13:00.305 Process pid: 2702222 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:00.305 20:44:24 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2702222 00:13:00.305 20:44:24 -- common/autotest_common.sh@817 -- # '[' -z 2702222 ']' 00:13:00.305 20:44:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.305 20:44:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:00.305 20:44:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.305 20:44:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:00.306 20:44:24 -- common/autotest_common.sh@10 -- # set +x 00:13:00.306 [2024-04-24 20:44:24.756329] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:00.306 [2024-04-24 20:44:24.757247] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:13:00.306 [2024-04-24 20:44:24.757291] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.306 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.306 [2024-04-24 20:44:24.814150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:00.306 [2024-04-24 20:44:24.877373] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.306 [2024-04-24 20:44:24.877410] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.306 [2024-04-24 20:44:24.877418] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:00.306 [2024-04-24 20:44:24.877426] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:00.306 [2024-04-24 20:44:24.877433] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:00.306 [2024-04-24 20:44:24.877558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:00.306 [2024-04-24 20:44:24.877698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.306 [2024-04-24 20:44:24.877851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.306 [2024-04-24 20:44:24.877852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.306 [2024-04-24 20:44:24.941851] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:13:00.306 [2024-04-24 20:44:24.942029] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:13:00.306 [2024-04-24 20:44:24.942295] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:13:00.306 [2024-04-24 20:44:24.942471] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:00.306 [2024-04-24 20:44:24.942562] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:13:00.565 20:44:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:00.565 20:44:24 -- common/autotest_common.sh@850 -- # return 0 00:13:00.565 20:44:24 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:01.504 20:44:25 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:01.764 20:44:26 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:01.764 20:44:26 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:01.764 20:44:26 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:01.764 20:44:26 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:01.764 20:44:26 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:01.764 Malloc1 00:13:02.025 20:44:26 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:02.025 20:44:26 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:02.286 20:44:26 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:02.547 20:44:27 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:02.547 20:44:27 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:02.547 20:44:27 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:02.807 Malloc2 00:13:02.807 20:44:27 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:03.068 20:44:27 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:03.068 20:44:27 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:03.328 20:44:27 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:03.328 20:44:27 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2702222 00:13:03.328 20:44:27 -- common/autotest_common.sh@936 -- # '[' -z 2702222 ']' 00:13:03.328 20:44:27 -- common/autotest_common.sh@940 -- # kill -0 2702222 00:13:03.328 20:44:27 -- common/autotest_common.sh@941 -- # uname 00:13:03.328 20:44:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:03.328 20:44:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2702222 00:13:03.588 20:44:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:03.588 20:44:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:03.588 20:44:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2702222' 00:13:03.588 killing process with pid 2702222 00:13:03.588 20:44:27 -- common/autotest_common.sh@955 -- # kill 2702222 00:13:03.588 20:44:27 -- common/autotest_common.sh@960 -- # wait 2702222 00:13:03.588 [2024-04-24 20:44:28.070316] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:13:03.588 20:44:28 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:03.588 20:44:28 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:03.588 00:13:03.588 real 0m51.426s 00:13:03.588 user 3m24.452s 00:13:03.588 sys 0m3.061s 00:13:03.588 20:44:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:03.588 20:44:28 -- common/autotest_common.sh@10 -- # set +x 00:13:03.588 ************************************ 00:13:03.588 END TEST nvmf_vfio_user 00:13:03.588 ************************************ 00:13:03.588 20:44:28 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:03.588 20:44:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:03.588 20:44:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:03.588 20:44:28 -- common/autotest_common.sh@10 -- # set +x 00:13:03.849 ************************************ 00:13:03.849 START TEST nvmf_vfio_user_nvme_compliance 00:13:03.849 ************************************ 00:13:03.849 20:44:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:03.849 * Looking for test storage... 
00:13:03.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:03.849 20:44:28 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.849 20:44:28 -- nvmf/common.sh@7 -- # uname -s 00:13:03.849 20:44:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.849 20:44:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.849 20:44:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.849 20:44:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.849 20:44:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.849 20:44:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.849 20:44:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.849 20:44:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.849 20:44:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.849 20:44:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.849 20:44:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:03.849 20:44:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:03.849 20:44:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.849 20:44:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.849 20:44:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.849 20:44:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.849 20:44:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.849 20:44:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.849 20:44:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.849 20:44:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.849 20:44:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.849 20:44:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.849 20:44:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.849 20:44:28 -- paths/export.sh@5 -- # export PATH 00:13:03.849 20:44:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.849 20:44:28 -- nvmf/common.sh@47 -- # : 0 00:13:03.849 20:44:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.849 20:44:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.849 20:44:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.849 20:44:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.849 20:44:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.849 20:44:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.849 20:44:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.849 20:44:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:03.849 20:44:28 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:03.849 20:44:28 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:03.849 20:44:28 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:03.850 20:44:28 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:03.850 20:44:28 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:03.850 20:44:28 -- compliance/compliance.sh@20 -- # nvmfpid=2703124 00:13:03.850 20:44:28 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2703124' 00:13:03.850 Process pid: 2703124 00:13:03.850 20:44:28 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:03.850 20:44:28 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:03.850 20:44:28 -- compliance/compliance.sh@24 -- # waitforlisten 2703124 00:13:03.850 20:44:28 -- common/autotest_common.sh@817 -- # '[' -z 2703124 ']' 00:13:03.850 20:44:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.850 20:44:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:03.850 20:44:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.850 20:44:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:03.850 20:44:28 -- common/autotest_common.sh@10 -- # set +x 00:13:04.110 [2024-04-24 20:44:28.536346] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:13:04.110 [2024-04-24 20:44:28.536411] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.110 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.110 [2024-04-24 20:44:28.616376] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:04.110 [2024-04-24 20:44:28.685939] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.110 [2024-04-24 20:44:28.685979] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.110 [2024-04-24 20:44:28.685986] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.110 [2024-04-24 20:44:28.685993] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.110 [2024-04-24 20:44:28.685998] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.110 [2024-04-24 20:44:28.686109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.110 [2024-04-24 20:44:28.686245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:04.110 [2024-04-24 20:44:28.686247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.050 20:44:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:05.050 20:44:29 -- common/autotest_common.sh@850 -- # return 0 00:13:05.050 20:44:29 -- compliance/compliance.sh@26 -- # sleep 1 00:13:05.990 20:44:30 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:05.990 20:44:30 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:05.990 20:44:30 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:05.990 20:44:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.990 20:44:30 -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 20:44:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.990 20:44:30 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:05.990 20:44:30 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:05.990 20:44:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.990 20:44:30 -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 malloc0 00:13:05.990 20:44:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.990 20:44:30 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:05.990 20:44:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.990 20:44:30 -- common/autotest_common.sh@10 -- # set +x 00:13:05.990 20:44:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.990 20:44:30 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:05.990 20:44:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.990 20:44:30 -- common/autotest_common.sh@10 -- # set +x 00:13:05.991 20:44:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.991 20:44:30 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:05.991 20:44:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:05.991 20:44:30 -- common/autotest_common.sh@10 -- # set +x 00:13:05.991 20:44:30 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:05.991 20:44:30 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:05.991 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.991 00:13:05.991 00:13:05.991 CUnit - A unit testing framework for C - Version 2.1-3 00:13:05.991 http://cunit.sourceforge.net/ 00:13:05.991 00:13:05.991 00:13:05.991 Suite: nvme_compliance 00:13:06.251 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-24 20:44:30.673309] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.251 [2024-04-24 20:44:30.674670] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:06.251 [2024-04-24 20:44:30.674684] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:06.251 [2024-04-24 20:44:30.674690] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:06.251 [2024-04-24 20:44:30.676329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.251 passed 00:13:06.251 Test: admin_identify_ctrlr_verify_fused ...[2024-04-24 20:44:30.768921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.251 [2024-04-24 20:44:30.771945] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.251 passed 00:13:06.251 Test: admin_identify_ns ...[2024-04-24 20:44:30.868023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.512 [2024-04-24 20:44:30.927740] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:06.512 [2024-04-24 20:44:30.935733] ctrlr.c:2656:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:06.512 [2024-04-24 20:44:30.956843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.512 passed 00:13:06.512 Test: admin_get_features_mandatory_features ...[2024-04-24 20:44:31.050506] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.512 [2024-04-24 20:44:31.053534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.512 passed 00:13:06.512 Test: admin_get_features_optional_features ...[2024-04-24 20:44:31.148070] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.512 [2024-04-24 20:44:31.151090] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.772 passed 00:13:06.772 Test: admin_set_features_number_of_queues ...[2024-04-24 20:44:31.244205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:06.772 [2024-04-24 20:44:31.348837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:06.772 passed 00:13:07.033 Test: admin_get_log_page_mandatory_logs ...[2024-04-24 20:44:31.440834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.033 [2024-04-24 20:44:31.443851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.033 passed 00:13:07.033 Test: admin_get_log_page_with_lpo ...[2024-04-24 20:44:31.536962] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.033 [2024-04-24 20:44:31.608739] 
ctrlr.c:2604:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:07.033 [2024-04-24 20:44:31.621781] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.033 passed 00:13:07.295 Test: fabric_property_get ...[2024-04-24 20:44:31.713412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.295 [2024-04-24 20:44:31.714695] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:07.295 [2024-04-24 20:44:31.716435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.295 passed 00:13:07.295 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-24 20:44:31.810006] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.295 [2024-04-24 20:44:31.811238] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:07.295 [2024-04-24 20:44:31.813022] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.295 passed 00:13:07.295 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-24 20:44:31.907128] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.556 [2024-04-24 20:44:31.990731] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:07.556 [2024-04-24 20:44:32.006732] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:07.556 [2024-04-24 20:44:32.011829] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.556 passed 00:13:07.556 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-24 20:44:32.103821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.556 [2024-04-24 20:44:32.105039] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:07.556 [2024-04-24 20:44:32.106835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.556 passed 00:13:07.815 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-24 20:44:32.198974] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.815 [2024-04-24 20:44:32.278739] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:07.815 [2024-04-24 20:44:32.302730] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:07.815 [2024-04-24 20:44:32.307824] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.815 passed 00:13:07.815 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-24 20:44:32.399396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:07.815 [2024-04-24 20:44:32.400619] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:07.815 [2024-04-24 20:44:32.400637] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:07.815 [2024-04-24 20:44:32.402413] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:07.815 passed 00:13:08.078 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-24 20:44:32.496496] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.078 [2024-04-24 20:44:32.587733] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:08.078 [2024-04-24 20:44:32.595732] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:08.078 [2024-04-24 20:44:32.603737] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:08.078 [2024-04-24 20:44:32.611730] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:08.078 [2024-04-24 20:44:32.640819] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.078 passed 00:13:08.338 Test: admin_create_io_sq_verify_pc ...[2024-04-24 20:44:32.732368] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:08.338 [2024-04-24 20:44:32.748739] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:08.338 [2024-04-24 20:44:32.766512] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:08.338 passed 00:13:08.338 Test: admin_create_io_qp_max_qps ...[2024-04-24 20:44:32.860045] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.722 [2024-04-24 20:44:33.976737] nvme_ctrlr.c:5329:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:09.722 [2024-04-24 20:44:34.356064] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:09.982 passed 00:13:09.982 Test: admin_create_io_sq_shared_cq ...[2024-04-24 20:44:34.449276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:09.982 [2024-04-24 20:44:34.579733] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:09.982 [2024-04-24 20:44:34.617786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:10.243 passed 00:13:10.243 00:13:10.243 Run Summary: Type Total Ran Passed Failed Inactive 00:13:10.243 suites 1 1 n/a 0 0 00:13:10.243 tests 18 18 18 0 0 00:13:10.243 asserts 360 360 360 0 n/a 00:13:10.243 00:13:10.243 Elapsed time = 1.654 seconds 00:13:10.243 20:44:34 -- compliance/compliance.sh@42 -- # killprocess 2703124 00:13:10.243 20:44:34 -- common/autotest_common.sh@936 -- # '[' -z 2703124 ']' 00:13:10.243 20:44:34 -- common/autotest_common.sh@940 -- # kill -0 2703124 00:13:10.243 20:44:34 -- common/autotest_common.sh@941 -- # uname 00:13:10.243 20:44:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:10.243 20:44:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2703124 00:13:10.243 20:44:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:10.243 20:44:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:10.243 20:44:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2703124' 00:13:10.243 killing process with pid 2703124 00:13:10.243 20:44:34 -- common/autotest_common.sh@955 -- # kill 2703124 00:13:10.243 20:44:34 -- common/autotest_common.sh@960 -- # wait 2703124 00:13:10.243 20:44:34 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:10.243 20:44:34 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:10.243 00:13:10.243 real 0m6.526s 00:13:10.243 user 0m18.726s 00:13:10.243 sys 0m0.484s 00:13:10.243 20:44:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:10.243 20:44:34 -- common/autotest_common.sh@10 -- # set +x 00:13:10.243 ************************************ 00:13:10.243 END TEST 
nvmf_vfio_user_nvme_compliance 00:13:10.243 ************************************ 00:13:10.503 20:44:34 -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:10.503 20:44:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:10.503 20:44:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:10.503 20:44:34 -- common/autotest_common.sh@10 -- # set +x 00:13:10.503 ************************************ 00:13:10.503 START TEST nvmf_vfio_user_fuzz 00:13:10.503 ************************************ 00:13:10.503 20:44:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:10.763 * Looking for test storage... 00:13:10.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.763 20:44:35 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.763 20:44:35 -- nvmf/common.sh@7 -- # uname -s 00:13:10.763 20:44:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.763 20:44:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.763 20:44:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.763 20:44:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.763 20:44:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.763 20:44:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.763 20:44:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.763 20:44:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.763 20:44:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.763 20:44:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.763 20:44:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:10.763 20:44:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:10.763 20:44:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.763 20:44:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.763 20:44:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.763 20:44:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.764 20:44:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.764 20:44:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.764 20:44:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.764 20:44:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.764 20:44:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.764 20:44:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.764 20:44:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.764 20:44:35 -- paths/export.sh@5 -- # export PATH 00:13:10.764 20:44:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.764 20:44:35 -- nvmf/common.sh@47 -- # : 0 00:13:10.764 20:44:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.764 20:44:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.764 20:44:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.764 20:44:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.764 20:44:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.764 20:44:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.764 20:44:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.764 20:44:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2704463 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2704463' 00:13:10.764 Process pid: 2704463 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2704463 00:13:10.764 20:44:35 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:10.764 20:44:35 -- common/autotest_common.sh@817 -- 
# '[' -z 2704463 ']' 00:13:10.764 20:44:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.764 20:44:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:10.764 20:44:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.764 20:44:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:10.764 20:44:35 -- common/autotest_common.sh@10 -- # set +x 00:13:11.703 20:44:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:11.703 20:44:36 -- common/autotest_common.sh@850 -- # return 0 00:13:11.703 20:44:36 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:12.642 20:44:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.642 20:44:37 -- common/autotest_common.sh@10 -- # set +x 00:13:12.642 20:44:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:12.642 20:44:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.642 20:44:37 -- common/autotest_common.sh@10 -- # set +x 00:13:12.642 malloc0 00:13:12.642 20:44:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:12.642 20:44:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.642 20:44:37 -- common/autotest_common.sh@10 -- # set +x 00:13:12.642 20:44:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:12.642 20:44:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.642 20:44:37 -- common/autotest_common.sh@10 -- # set +x 00:13:12.642 20:44:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:12.642 20:44:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:12.642 20:44:37 -- common/autotest_common.sh@10 -- # set +x 00:13:12.642 20:44:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:12.642 20:44:37 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:44.747 Fuzzing completed. 
Shutting down the fuzz application 00:13:44.747 00:13:44.747 Dumping successful admin opcodes: 00:13:44.747 8, 9, 10, 24, 00:13:44.747 Dumping successful io opcodes: 00:13:44.747 0, 00:13:44.747 NS: 0x200003a1ef00 I/O qp, Total commands completed: 950646, total successful commands: 3721, random_seed: 3292021696 00:13:44.747 NS: 0x200003a1ef00 admin qp, Total commands completed: 231510, total successful commands: 1854, random_seed: 2847755328 00:13:44.747 20:45:07 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:44.747 20:45:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:44.747 20:45:07 -- common/autotest_common.sh@10 -- # set +x 00:13:44.747 20:45:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:44.747 20:45:07 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2704463 00:13:44.747 20:45:07 -- common/autotest_common.sh@936 -- # '[' -z 2704463 ']' 00:13:44.747 20:45:07 -- common/autotest_common.sh@940 -- # kill -0 2704463 00:13:44.747 20:45:07 -- common/autotest_common.sh@941 -- # uname 00:13:44.747 20:45:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:44.747 20:45:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2704463 00:13:44.747 20:45:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:44.747 20:45:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:44.747 20:45:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2704463' 00:13:44.747 killing process with pid 2704463 00:13:44.747 20:45:07 -- common/autotest_common.sh@955 -- # kill 2704463 00:13:44.747 20:45:07 -- common/autotest_common.sh@960 -- # wait 2704463 00:13:44.748 20:45:07 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:44.748 20:45:07 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:44.748 00:13:44.748 real 0m32.762s 00:13:44.748 user 0m37.836s 00:13:44.748 sys 0m23.400s 00:13:44.748 20:45:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:44.748 20:45:07 -- common/autotest_common.sh@10 -- # set +x 00:13:44.748 ************************************ 00:13:44.748 END TEST nvmf_vfio_user_fuzz 00:13:44.748 ************************************ 00:13:44.748 20:45:07 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.748 20:45:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:44.748 20:45:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:44.748 20:45:07 -- common/autotest_common.sh@10 -- # set +x 00:13:44.748 ************************************ 00:13:44.748 START TEST nvmf_host_management 00:13:44.748 ************************************ 00:13:44.748 20:45:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:44.748 * Looking for test storage... 
00:13:44.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:44.748 20:45:08 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.748 20:45:08 -- nvmf/common.sh@7 -- # uname -s 00:13:44.748 20:45:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.748 20:45:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.748 20:45:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.748 20:45:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.748 20:45:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.748 20:45:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.748 20:45:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.748 20:45:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.748 20:45:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.748 20:45:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.748 20:45:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:44.748 20:45:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:44.748 20:45:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.748 20:45:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.748 20:45:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.748 20:45:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.748 20:45:08 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:44.748 20:45:08 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.748 20:45:08 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.748 20:45:08 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.748 20:45:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.748 20:45:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.748 20:45:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.748 20:45:08 -- paths/export.sh@5 -- # export PATH 00:13:44.748 20:45:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:44.748 20:45:08 -- nvmf/common.sh@47 -- # : 0 00:13:44.748 20:45:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.748 20:45:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.748 20:45:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.748 20:45:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.748 20:45:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.748 20:45:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.748 20:45:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.748 20:45:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.748 20:45:08 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:44.748 20:45:08 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:44.748 20:45:08 -- target/host_management.sh@105 -- # nvmftestinit 00:13:44.748 20:45:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:44.748 20:45:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.748 20:45:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:44.748 20:45:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:44.748 20:45:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:44.748 20:45:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.748 20:45:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:44.748 20:45:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.748 20:45:08 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:44.748 20:45:08 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:44.748 20:45:08 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.748 20:45:08 -- common/autotest_common.sh@10 -- # set +x 00:13:51.341 20:45:15 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:51.341 20:45:15 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.341 20:45:15 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.341 20:45:15 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.341 20:45:15 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.341 20:45:15 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.341 20:45:15 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.341 20:45:15 -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.341 20:45:15 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.341 
20:45:15 -- nvmf/common.sh@296 -- # e810=() 00:13:51.341 20:45:15 -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.341 20:45:15 -- nvmf/common.sh@297 -- # x722=() 00:13:51.341 20:45:15 -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.341 20:45:15 -- nvmf/common.sh@298 -- # mlx=() 00:13:51.341 20:45:15 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.341 20:45:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.341 20:45:15 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.341 20:45:15 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:51.341 20:45:15 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.341 20:45:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.341 20:45:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:51.341 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:51.341 20:45:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.341 20:45:15 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:51.341 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:51.341 20:45:15 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.341 20:45:15 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.341 20:45:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.341 20:45:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:51.341 20:45:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.341 20:45:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:4b:00.0: cvl_0_0' 00:13:51.341 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:51.341 20:45:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.341 20:45:15 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.341 20:45:15 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.341 20:45:15 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:51.341 20:45:15 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.341 20:45:15 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:51.341 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:51.341 20:45:15 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.341 20:45:15 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:51.341 20:45:15 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:51.341 20:45:15 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:51.341 20:45:15 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:51.341 20:45:15 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:51.341 20:45:15 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:51.341 20:45:15 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:51.341 20:45:15 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:51.341 20:45:15 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:51.341 20:45:15 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:51.341 20:45:15 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:51.341 20:45:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:51.341 20:45:15 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:51.341 20:45:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:51.341 20:45:15 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:51.341 20:45:15 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:51.341 20:45:15 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:51.341 20:45:15 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:51.341 20:45:15 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:51.341 20:45:15 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:51.341 20:45:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:51.341 20:45:15 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:51.341 20:45:15 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:51.341 20:45:15 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:51.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:51.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:13:51.341 00:13:51.341 --- 10.0.0.2 ping statistics --- 00:13:51.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.341 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:13:51.341 20:45:15 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:51.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:51.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:13:51.341 00:13:51.341 --- 10.0.0.1 ping statistics --- 00:13:51.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:51.341 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:13:51.341 20:45:15 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:51.342 20:45:15 -- nvmf/common.sh@411 -- # return 0 00:13:51.342 20:45:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:51.342 20:45:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:51.342 20:45:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:51.342 20:45:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:51.342 20:45:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:51.342 20:45:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:51.342 20:45:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:51.342 20:45:15 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:13:51.342 20:45:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:51.342 20:45:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:51.342 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:13:51.342 ************************************ 00:13:51.342 START TEST nvmf_host_management 00:13:51.342 ************************************ 00:13:51.342 20:45:15 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:13:51.342 20:45:15 -- target/host_management.sh@69 -- # starttarget 00:13:51.342 20:45:15 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:51.342 20:45:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:51.342 20:45:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:51.342 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:13:51.342 20:45:15 -- nvmf/common.sh@470 -- # nvmfpid=2715104 00:13:51.342 20:45:15 -- nvmf/common.sh@471 -- # waitforlisten 2715104 00:13:51.342 20:45:15 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:51.342 20:45:15 -- common/autotest_common.sh@817 -- # '[' -z 2715104 ']' 00:13:51.342 20:45:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.342 20:45:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:51.342 20:45:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.342 20:45:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:51.342 20:45:15 -- common/autotest_common.sh@10 -- # set +x 00:13:51.342 [2024-04-24 20:45:15.686327] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:13:51.342 [2024-04-24 20:45:15.686421] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.342 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.342 [2024-04-24 20:45:15.761108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.342 [2024-04-24 20:45:15.835690] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:51.342 [2024-04-24 20:45:15.835737] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.342 [2024-04-24 20:45:15.835746] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.342 [2024-04-24 20:45:15.835753] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.342 [2024-04-24 20:45:15.835759] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.342 [2024-04-24 20:45:15.835893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.342 [2024-04-24 20:45:15.836175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.342 [2024-04-24 20:45:15.836334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.342 [2024-04-24 20:45:15.836334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:52.288 20:45:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:52.288 20:45:16 -- common/autotest_common.sh@850 -- # return 0 00:13:52.288 20:45:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:52.288 20:45:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:52.289 20:45:16 -- common/autotest_common.sh@10 -- # set +x 00:13:52.289 20:45:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.289 20:45:16 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.289 20:45:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.289 20:45:16 -- common/autotest_common.sh@10 -- # set +x 00:13:52.289 [2024-04-24 20:45:16.614691] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.289 20:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.289 20:45:16 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:52.289 20:45:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:52.289 20:45:16 -- common/autotest_common.sh@10 -- # set +x 00:13:52.289 20:45:16 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:52.289 20:45:16 -- target/host_management.sh@23 -- # cat 00:13:52.289 20:45:16 -- target/host_management.sh@30 -- # rpc_cmd 00:13:52.289 20:45:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:52.289 20:45:16 -- common/autotest_common.sh@10 -- # set +x 00:13:52.289 Malloc0 00:13:52.289 [2024-04-24 20:45:16.678040] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.289 20:45:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:52.289 20:45:16 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:52.289 20:45:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:52.289 20:45:16 -- common/autotest_common.sh@10 -- # set +x 00:13:52.289 20:45:16 -- target/host_management.sh@73 -- # perfpid=2715469 00:13:52.289 20:45:16 -- target/host_management.sh@74 -- # waitforlisten 2715469 /var/tmp/bdevperf.sock 00:13:52.289 20:45:16 -- common/autotest_common.sh@817 -- # '[' -z 2715469 ']' 00:13:52.289 20:45:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:52.289 20:45:16 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w 
verify -t 10 00:13:52.289 20:45:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:52.289 20:45:16 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:52.289 20:45:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:52.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:52.289 20:45:16 -- nvmf/common.sh@521 -- # config=() 00:13:52.289 20:45:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:52.289 20:45:16 -- nvmf/common.sh@521 -- # local subsystem config 00:13:52.289 20:45:16 -- common/autotest_common.sh@10 -- # set +x 00:13:52.289 20:45:16 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:52.289 20:45:16 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:52.289 { 00:13:52.289 "params": { 00:13:52.289 "name": "Nvme$subsystem", 00:13:52.289 "trtype": "$TEST_TRANSPORT", 00:13:52.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.289 "adrfam": "ipv4", 00:13:52.289 "trsvcid": "$NVMF_PORT", 00:13:52.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.289 "hdgst": ${hdgst:-false}, 00:13:52.289 "ddgst": ${ddgst:-false} 00:13:52.289 }, 00:13:52.289 "method": "bdev_nvme_attach_controller" 00:13:52.289 } 00:13:52.289 EOF 00:13:52.289 )") 00:13:52.289 20:45:16 -- nvmf/common.sh@543 -- # cat 00:13:52.289 20:45:16 -- nvmf/common.sh@545 -- # jq . 00:13:52.289 20:45:16 -- nvmf/common.sh@546 -- # IFS=, 00:13:52.289 20:45:16 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:52.289 "params": { 00:13:52.289 "name": "Nvme0", 00:13:52.289 "trtype": "tcp", 00:13:52.289 "traddr": "10.0.0.2", 00:13:52.289 "adrfam": "ipv4", 00:13:52.289 "trsvcid": "4420", 00:13:52.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.289 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:52.289 "hdgst": false, 00:13:52.289 "ddgst": false 00:13:52.289 }, 00:13:52.289 "method": "bdev_nvme_attach_controller" 00:13:52.289 }' 00:13:52.289 [2024-04-24 20:45:16.775338] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:13:52.289 [2024-04-24 20:45:16.775388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715469 ] 00:13:52.289 EAL: No free 2048 kB hugepages reported on node 1 00:13:52.289 [2024-04-24 20:45:16.852070] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.289 [2024-04-24 20:45:16.914976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.550 Running I/O for 10 seconds... 
00:13:53.124 20:45:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:53.124 20:45:17 -- common/autotest_common.sh@850 -- # return 0 00:13:53.124 20:45:17 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:53.124 20:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.124 20:45:17 -- common/autotest_common.sh@10 -- # set +x 00:13:53.124 20:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.124 20:45:17 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:53.124 20:45:17 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:53.124 20:45:17 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:53.124 20:45:17 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:53.124 20:45:17 -- target/host_management.sh@52 -- # local ret=1 00:13:53.124 20:45:17 -- target/host_management.sh@53 -- # local i 00:13:53.124 20:45:17 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:53.124 20:45:17 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:53.124 20:45:17 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:53.124 20:45:17 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:53.124 20:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.124 20:45:17 -- common/autotest_common.sh@10 -- # set +x 00:13:53.124 20:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.124 20:45:17 -- target/host_management.sh@55 -- # read_io_count=1038 00:13:53.124 20:45:17 -- target/host_management.sh@58 -- # '[' 1038 -ge 100 ']' 00:13:53.124 20:45:17 -- target/host_management.sh@59 -- # ret=0 00:13:53.124 20:45:17 -- target/host_management.sh@60 -- # break 00:13:53.124 20:45:17 -- target/host_management.sh@64 -- # return 0 00:13:53.124 20:45:17 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:53.124 20:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.124 20:45:17 -- common/autotest_common.sh@10 -- # set +x 00:13:53.124 [2024-04-24 20:45:17.733400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733456] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733463] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733469] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733476] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733482] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733488] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the 
state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733495] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.124 [2024-04-24 20:45:17.733507] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.125 [2024-04-24 20:45:17.733513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.125 [2024-04-24 20:45:17.733520] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.125 [2024-04-24 20:45:17.733526] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.125 [2024-04-24 20:45:17.733533] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f8d270 is same with the state(5) to be set 00:13:53.125 [2024-04-24 20:45:17.737972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.125 [2024-04-24 20:45:17.738014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.125 [2024-04-24 20:45:17.738024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.125 [2024-04-24 20:45:17.738032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.125 [2024-04-24 20:45:17.738045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.125 [2024-04-24 20:45:17.738053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.125 [2024-04-24 20:45:17.738061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:53.125 [2024-04-24 20:45:17.738068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.125 [2024-04-24 20:45:17.738076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bddf30 is same with the state(5) to be set 00:13:53.125 20:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.125 20:45:17 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:53.125 20:45:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:53.125 20:45:17 -- common/autotest_common.sh@10 -- # set +x 00:13:53.125 [2024-04-24 20:45:17.745426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.125 [2024-04-24 20:45:17.745449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.125 [2024-04-24 20:45:17.745463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.125 [2024-04-24 20:45:17.745471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same two-line pattern repeats for every remaining queued WRITE on qid:1 — cid:2 through cid:59 plus cid:61, nsid:1, lba climbing from 16640 to 24064 in 128-block steps, len:128 — each command printed by nvme_io_qpair_print_command and each completed by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:13:53.126 [2024-04-24 20:45:17.746535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.126
[2024-04-24 20:45:17.746543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.126 [2024-04-24 20:45:17.746552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.126 [2024-04-24 20:45:17.746560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.126 [2024-04-24 20:45:17.746570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:53.126 [2024-04-24 20:45:17.746578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:53.126 [2024-04-24 20:45:17.746629] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1feeab0 was disconnected and freed. reset controller. 00:13:53.126 [2024-04-24 20:45:17.747805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:53.127 task offset: 16384 on job bdev=Nvme0n1 fails 00:13:53.127 00:13:53.127 Latency(us) 00:13:53.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.127 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:53.127 Job: Nvme0n1 ended in about 0.68 seconds with error 00:13:53.127 Verification LBA range: start 0x0 length 0x400 00:13:53.127 Nvme0n1 : 0.68 1691.62 105.73 93.98 0.00 35026.71 1652.05 32331.09 00:13:53.127 =================================================================================================================== 00:13:53.127 Total : 1691.62 105.73 93.98 0.00 35026.71 1652.05 32331.09 00:13:53.127 [2024-04-24 20:45:17.749787] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:53.127 [2024-04-24 20:45:17.749809] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bddf30 (9): Bad file descriptor 00:13:53.127 20:45:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:53.127 20:45:17 -- target/host_management.sh@87 -- # sleep 1 00:13:53.387 [2024-04-24 20:45:17.883829] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:54.331 20:45:18 -- target/host_management.sh@91 -- # kill -9 2715469 00:13:54.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2715469) - No such process 00:13:54.331 20:45:18 -- target/host_management.sh@91 -- # true 00:13:54.331 20:45:18 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:54.331 20:45:18 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:54.331 20:45:18 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:54.331 20:45:18 -- nvmf/common.sh@521 -- # config=() 00:13:54.331 20:45:18 -- nvmf/common.sh@521 -- # local subsystem config 00:13:54.331 20:45:18 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:13:54.331 20:45:18 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:13:54.331 { 00:13:54.331 "params": { 00:13:54.331 "name": "Nvme$subsystem", 00:13:54.331 "trtype": "$TEST_TRANSPORT", 00:13:54.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:54.331 "adrfam": "ipv4", 00:13:54.331 "trsvcid": "$NVMF_PORT", 00:13:54.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:54.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:54.331 "hdgst": ${hdgst:-false}, 00:13:54.331 "ddgst": ${ddgst:-false} 00:13:54.331 }, 00:13:54.331 "method": "bdev_nvme_attach_controller" 00:13:54.331 } 00:13:54.331 EOF 00:13:54.331 )") 00:13:54.331 20:45:18 -- nvmf/common.sh@543 -- # cat 00:13:54.331 20:45:18 -- nvmf/common.sh@545 -- # jq . 00:13:54.331 20:45:18 -- nvmf/common.sh@546 -- # IFS=, 00:13:54.331 20:45:18 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:13:54.331 "params": { 00:13:54.331 "name": "Nvme0", 00:13:54.331 "trtype": "tcp", 00:13:54.331 "traddr": "10.0.0.2", 00:13:54.331 "adrfam": "ipv4", 00:13:54.331 "trsvcid": "4420", 00:13:54.331 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:54.331 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:54.331 "hdgst": false, 00:13:54.331 "ddgst": false 00:13:54.331 }, 00:13:54.331 "method": "bdev_nvme_attach_controller" 00:13:54.331 }' 00:13:54.331 [2024-04-24 20:45:18.805083] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:13:54.331 [2024-04-24 20:45:18.805136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2715828 ] 00:13:54.331 EAL: No free 2048 kB hugepages reported on node 1 00:13:54.331 [2024-04-24 20:45:18.880049] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.331 [2024-04-24 20:45:18.942506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.592 Running I/O for 1 seconds... 
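Editor's note: the job config fed to bdevperf above is built by gen_nvmf_target_json from the per-subsystem heredoc and handed over through /dev/fd/62. A minimal stand-alone sketch of the same pattern, assuming the target from this run is still listening at 10.0.0.2:4420 for nqn.2016-06.io.spdk:cnode0; the temporary file path is illustrative, not what the test script uses:

# Sketch only: write an SPDK JSON config that attaches the NVMe-oF/TCP controller as bdev "Nvme0"
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "10.0.0.2",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# same workload flags as the run above (queue depth 64, 64 KiB verify writes for 1 second)
./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1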
00:13:55.534 00:13:55.534 Latency(us) 00:13:55.534 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.534 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:55.534 Verification LBA range: start 0x0 length 0x400 00:13:55.535 Nvme0n1 : 1.00 1533.07 95.82 0.00 0.00 41022.87 4942.51 33860.27 00:13:55.535 =================================================================================================================== 00:13:55.535 Total : 1533.07 95.82 0.00 0.00 41022.87 4942.51 33860.27 00:13:55.795 20:45:20 -- target/host_management.sh@102 -- # stoptarget 00:13:55.795 20:45:20 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:55.795 20:45:20 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:55.795 20:45:20 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:55.795 20:45:20 -- target/host_management.sh@40 -- # nvmftestfini 00:13:55.795 20:45:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:55.795 20:45:20 -- nvmf/common.sh@117 -- # sync 00:13:55.795 20:45:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.795 20:45:20 -- nvmf/common.sh@120 -- # set +e 00:13:55.795 20:45:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.795 20:45:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.795 rmmod nvme_tcp 00:13:55.795 rmmod nvme_fabrics 00:13:55.795 rmmod nvme_keyring 00:13:55.795 20:45:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.795 20:45:20 -- nvmf/common.sh@124 -- # set -e 00:13:55.795 20:45:20 -- nvmf/common.sh@125 -- # return 0 00:13:55.795 20:45:20 -- nvmf/common.sh@478 -- # '[' -n 2715104 ']' 00:13:55.795 20:45:20 -- nvmf/common.sh@479 -- # killprocess 2715104 00:13:55.795 20:45:20 -- common/autotest_common.sh@936 -- # '[' -z 2715104 ']' 00:13:55.795 20:45:20 -- common/autotest_common.sh@940 -- # kill -0 2715104 00:13:55.795 20:45:20 -- common/autotest_common.sh@941 -- # uname 00:13:55.795 20:45:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:55.795 20:45:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2715104 00:13:55.795 20:45:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:55.795 20:45:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:55.795 20:45:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2715104' 00:13:55.795 killing process with pid 2715104 00:13:55.795 20:45:20 -- common/autotest_common.sh@955 -- # kill 2715104 00:13:55.796 20:45:20 -- common/autotest_common.sh@960 -- # wait 2715104 00:13:56.056 [2024-04-24 20:45:20.476486] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:56.056 20:45:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:56.056 20:45:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:56.056 20:45:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:56.056 20:45:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.056 20:45:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.056 20:45:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.056 20:45:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.056 20:45:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.969 20:45:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
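Editor's note: the nvmftestfini sequence traced above unloads the host-side NVMe/TCP modules, kills the nvmf_tgt process and removes the test namespace. A condensed sketch of those steps; the $nvmfpid variable and the explicit netns delete are illustrative stand-ins for the killprocess and _remove_spdk_ns helpers:

sync
modprobe -v -r nvme-tcp            # in this run this also rmmod'd nvme_fabrics and nvme_keyring, as shown above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid" # killprocess: the nvmf_tgt (reactor_1) started for this test
ip netns delete cvl_0_0_ns_spdk    # roughly what _remove_spdk_ns does here (assumption)
ip -4 addr flush cvl_0_1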
00:13:57.969 00:13:57.969 real 0m6.951s 00:13:57.969 user 0m21.435s 00:13:57.969 sys 0m1.087s 00:13:57.969 20:45:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:57.969 20:45:22 -- common/autotest_common.sh@10 -- # set +x 00:13:57.969 ************************************ 00:13:57.969 END TEST nvmf_host_management 00:13:57.969 ************************************ 00:13:58.230 20:45:22 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:58.230 00:13:58.230 real 0m14.594s 00:13:58.230 user 0m23.512s 00:13:58.230 sys 0m6.588s 00:13:58.230 20:45:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:58.230 20:45:22 -- common/autotest_common.sh@10 -- # set +x 00:13:58.230 ************************************ 00:13:58.230 END TEST nvmf_host_management 00:13:58.230 ************************************ 00:13:58.230 20:45:22 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:58.230 20:45:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:58.230 20:45:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:58.230 20:45:22 -- common/autotest_common.sh@10 -- # set +x 00:13:58.230 ************************************ 00:13:58.230 START TEST nvmf_lvol 00:13:58.230 ************************************ 00:13:58.230 20:45:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:58.491 * Looking for test storage... 00:13:58.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.491 20:45:22 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.491 20:45:22 -- nvmf/common.sh@7 -- # uname -s 00:13:58.491 20:45:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.491 20:45:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.491 20:45:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.491 20:45:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.491 20:45:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.491 20:45:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.491 20:45:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.491 20:45:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.491 20:45:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.491 20:45:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.491 20:45:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:58.491 20:45:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:13:58.491 20:45:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.491 20:45:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.491 20:45:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.491 20:45:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.491 20:45:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.491 20:45:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.491 20:45:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.491 20:45:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.492 20:45:22 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.492 20:45:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.492 20:45:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.492 20:45:22 -- paths/export.sh@5 -- # export PATH 00:13:58.492 20:45:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.492 20:45:22 -- nvmf/common.sh@47 -- # : 0 00:13:58.492 20:45:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.492 20:45:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.492 20:45:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.492 20:45:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.492 20:45:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.492 20:45:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.492 20:45:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.492 20:45:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.492 20:45:22 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.492 20:45:22 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.492 20:45:22 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:58.492 20:45:22 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:58.492 20:45:22 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.492 20:45:22 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:58.492 20:45:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:58.492 20:45:22 -- nvmf/common.sh@435 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.492 20:45:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:58.492 20:45:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:58.492 20:45:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:58.492 20:45:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.492 20:45:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.492 20:45:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.492 20:45:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:13:58.492 20:45:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:58.492 20:45:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.492 20:45:22 -- common/autotest_common.sh@10 -- # set +x 00:14:05.076 20:45:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:05.076 20:45:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.076 20:45:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.076 20:45:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.076 20:45:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.076 20:45:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.076 20:45:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.076 20:45:29 -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.076 20:45:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.076 20:45:29 -- nvmf/common.sh@296 -- # e810=() 00:14:05.076 20:45:29 -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.076 20:45:29 -- nvmf/common.sh@297 -- # x722=() 00:14:05.076 20:45:29 -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.076 20:45:29 -- nvmf/common.sh@298 -- # mlx=() 00:14:05.076 20:45:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.076 20:45:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.076 20:45:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.076 20:45:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:05.076 20:45:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.076 20:45:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.076 20:45:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:05.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:05.076 20:45:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.076 
20:45:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.076 20:45:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:05.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:05.076 20:45:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.076 20:45:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.076 20:45:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.076 20:45:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.076 20:45:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.076 20:45:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:05.076 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:05.076 20:45:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.076 20:45:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.076 20:45:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.076 20:45:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.076 20:45:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.076 20:45:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:05.076 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:05.076 20:45:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.076 20:45:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:05.076 20:45:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:05.076 20:45:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:05.076 20:45:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:05.076 20:45:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.076 20:45:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.076 20:45:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.076 20:45:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.076 20:45:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.076 20:45:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.076 20:45:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.076 20:45:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.076 20:45:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.076 20:45:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.076 20:45:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.076 20:45:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.076 20:45:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.337 20:45:29 -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:14:05.337 20:45:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.337 20:45:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.337 20:45:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.337 20:45:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.337 20:45:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.337 20:45:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:14:05.337 00:14:05.338 --- 10.0.0.2 ping statistics --- 00:14:05.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.338 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:14:05.338 20:45:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:14:05.338 00:14:05.338 --- 10.0.0.1 ping statistics --- 00:14:05.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.338 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:14:05.338 20:45:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.338 20:45:29 -- nvmf/common.sh@411 -- # return 0 00:14:05.338 20:45:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:05.338 20:45:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.338 20:45:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:05.338 20:45:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:05.338 20:45:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.338 20:45:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:05.338 20:45:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:05.598 20:45:29 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:05.598 20:45:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:05.598 20:45:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:05.598 20:45:29 -- common/autotest_common.sh@10 -- # set +x 00:14:05.598 20:45:29 -- nvmf/common.sh@470 -- # nvmfpid=2720211 00:14:05.598 20:45:30 -- nvmf/common.sh@471 -- # waitforlisten 2720211 00:14:05.598 20:45:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:05.598 20:45:30 -- common/autotest_common.sh@817 -- # '[' -z 2720211 ']' 00:14:05.598 20:45:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.598 20:45:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.598 20:45:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.598 20:45:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.598 20:45:30 -- common/autotest_common.sh@10 -- # set +x 00:14:05.598 [2024-04-24 20:45:30.058049] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:14:05.598 [2024-04-24 20:45:30.058115] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.598 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.598 [2024-04-24 20:45:30.164822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.598 [2024-04-24 20:45:30.237963] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.598 [2024-04-24 20:45:30.238014] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.598 [2024-04-24 20:45:30.238025] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.598 [2024-04-24 20:45:30.238035] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.598 [2024-04-24 20:45:30.238040] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.859 [2024-04-24 20:45:30.239741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.859 [2024-04-24 20:45:30.239794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.859 [2024-04-24 20:45:30.239799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.429 20:45:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:06.429 20:45:30 -- common/autotest_common.sh@850 -- # return 0 00:14:06.429 20:45:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:06.429 20:45:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:06.429 20:45:30 -- common/autotest_common.sh@10 -- # set +x 00:14:06.429 20:45:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.429 20:45:30 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:06.690 [2024-04-24 20:45:31.157241] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.690 20:45:31 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:06.949 20:45:31 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:06.949 20:45:31 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.210 20:45:31 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:07.210 20:45:31 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:07.470 20:45:31 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:07.470 20:45:32 -- target/nvmf_lvol.sh@29 -- # lvs=6b6cf2bb-557e-47aa-9330-73937d01d3b0 00:14:07.470 20:45:32 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6b6cf2bb-557e-47aa-9330-73937d01d3b0 lvol 20 00:14:07.731 20:45:32 -- target/nvmf_lvol.sh@32 -- # lvol=61a5790c-1a65-41ac-91ae-0e99eb323e9c 00:14:07.731 20:45:32 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:07.991 20:45:32 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 61a5790c-1a65-41ac-91ae-0e99eb323e9c 00:14:08.252 20:45:32 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:08.252 [2024-04-24 20:45:32.863448] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:08.516 20:45:32 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:08.516 20:45:33 -- target/nvmf_lvol.sh@42 -- # perf_pid=2720891 00:14:08.516 20:45:33 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:08.516 20:45:33 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:08.516 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.898 20:45:34 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 61a5790c-1a65-41ac-91ae-0e99eb323e9c MY_SNAPSHOT 00:14:09.898 20:45:34 -- target/nvmf_lvol.sh@47 -- # snapshot=b664890f-97a5-40ec-bbe8-a9ea8ecb1604 00:14:09.898 20:45:34 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 61a5790c-1a65-41ac-91ae-0e99eb323e9c 30 00:14:10.158 20:45:34 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b664890f-97a5-40ec-bbe8-a9ea8ecb1604 MY_CLONE 00:14:10.418 20:45:34 -- target/nvmf_lvol.sh@49 -- # clone=714e5e6e-3f82-4a06-908b-6055b96c3afc 00:14:10.419 20:45:34 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 714e5e6e-3f82-4a06-908b-6055b96c3afc 00:14:10.990 20:45:35 -- target/nvmf_lvol.sh@53 -- # wait 2720891 00:14:19.125 Initializing NVMe Controllers 00:14:19.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:19.125 Controller IO queue size 128, less than required. 00:14:19.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:19.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:19.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:19.125 Initialization complete. Launching workers. 
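Editor's note: the nvmf_lvol setup traced above is driven entirely over JSON-RPC — a TCP transport, two 64 MiB malloc bdevs striped into raid0, an lvstore on the raid, a 20 MiB lvol exported through nqn.2016-06.io.spdk:cnode0, and then snapshot/resize/clone/inflate while spdk_nvme_perf writes to the namespace. A condensed sketch of that rpc.py sequence; $rpc and the shell variables holding the returned UUIDs/names are illustrative:

rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                    # Malloc0
$rpc bdev_malloc_create 64 512                                    # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                    # prints the lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                   # 20 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# exercised while spdk_nvme_perf runs against the exported namespace:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30                                  # grow the lvol to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"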
00:14:19.125 ======================================================== 00:14:19.125 Latency(us) 00:14:19.125 Device Information : IOPS MiB/s Average min max 00:14:19.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11629.01 45.43 11009.65 1575.70 55418.28 00:14:19.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11721.41 45.79 10922.43 3792.76 57768.91 00:14:19.125 ======================================================== 00:14:19.125 Total : 23350.42 91.21 10965.86 1575.70 57768.91 00:14:19.125 00:14:19.125 20:45:43 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:19.125 20:45:43 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 61a5790c-1a65-41ac-91ae-0e99eb323e9c 00:14:19.386 20:45:43 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6b6cf2bb-557e-47aa-9330-73937d01d3b0 00:14:19.646 20:45:44 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:19.646 20:45:44 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:19.646 20:45:44 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:19.646 20:45:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:19.646 20:45:44 -- nvmf/common.sh@117 -- # sync 00:14:19.646 20:45:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:19.646 20:45:44 -- nvmf/common.sh@120 -- # set +e 00:14:19.646 20:45:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:19.646 20:45:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:19.646 rmmod nvme_tcp 00:14:19.646 rmmod nvme_fabrics 00:14:19.646 rmmod nvme_keyring 00:14:19.646 20:45:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:19.646 20:45:44 -- nvmf/common.sh@124 -- # set -e 00:14:19.646 20:45:44 -- nvmf/common.sh@125 -- # return 0 00:14:19.646 20:45:44 -- nvmf/common.sh@478 -- # '[' -n 2720211 ']' 00:14:19.646 20:45:44 -- nvmf/common.sh@479 -- # killprocess 2720211 00:14:19.646 20:45:44 -- common/autotest_common.sh@936 -- # '[' -z 2720211 ']' 00:14:19.646 20:45:44 -- common/autotest_common.sh@940 -- # kill -0 2720211 00:14:19.646 20:45:44 -- common/autotest_common.sh@941 -- # uname 00:14:19.647 20:45:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:19.647 20:45:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2720211 00:14:19.647 20:45:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:19.647 20:45:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:19.647 20:45:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2720211' 00:14:19.647 killing process with pid 2720211 00:14:19.647 20:45:44 -- common/autotest_common.sh@955 -- # kill 2720211 00:14:19.647 20:45:44 -- common/autotest_common.sh@960 -- # wait 2720211 00:14:19.908 20:45:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:19.908 20:45:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:19.908 20:45:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:19.908 20:45:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:19.909 20:45:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:19.909 20:45:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.909 20:45:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.909 20:45:44 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:21.870 20:45:46 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.870 00:14:21.870 real 0m23.605s 00:14:21.870 user 1m6.144s 00:14:21.870 sys 0m7.669s 00:14:21.870 20:45:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:21.870 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:14:21.870 ************************************ 00:14:21.870 END TEST nvmf_lvol 00:14:21.870 ************************************ 00:14:21.870 20:45:46 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:21.870 20:45:46 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:21.870 20:45:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:21.870 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:14:22.131 ************************************ 00:14:22.131 START TEST nvmf_lvs_grow 00:14:22.131 ************************************ 00:14:22.131 20:45:46 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:22.131 * Looking for test storage... 00:14:22.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:22.131 20:45:46 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.131 20:45:46 -- nvmf/common.sh@7 -- # uname -s 00:14:22.131 20:45:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.131 20:45:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.131 20:45:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.131 20:45:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.131 20:45:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.131 20:45:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.131 20:45:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.131 20:45:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.131 20:45:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.131 20:45:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.131 20:45:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:22.131 20:45:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:14:22.131 20:45:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.131 20:45:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.131 20:45:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:22.131 20:45:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.131 20:45:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:22.131 20:45:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.131 20:45:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.131 20:45:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.131 20:45:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.131 20:45:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.131 20:45:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.131 20:45:46 -- paths/export.sh@5 -- # export PATH 00:14:22.131 20:45:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.131 20:45:46 -- nvmf/common.sh@47 -- # : 0 00:14:22.131 20:45:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.131 20:45:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.131 20:45:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.131 20:45:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.131 20:45:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.131 20:45:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.131 20:45:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.131 20:45:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.131 20:45:46 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:22.131 20:45:46 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:22.131 20:45:46 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:22.131 20:45:46 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:22.131 20:45:46 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.131 20:45:46 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:22.131 20:45:46 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:22.131 20:45:46 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:14:22.131 20:45:46 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.131 20:45:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.131 20:45:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.131 20:45:46 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:14:22.131 20:45:46 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:22.131 20:45:46 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.131 20:45:46 -- common/autotest_common.sh@10 -- # set +x 00:14:28.724 20:45:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:28.724 20:45:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:28.724 20:45:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:28.724 20:45:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:28.724 20:45:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:28.724 20:45:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:28.724 20:45:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:28.724 20:45:53 -- nvmf/common.sh@295 -- # net_devs=() 00:14:28.724 20:45:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:28.724 20:45:53 -- nvmf/common.sh@296 -- # e810=() 00:14:28.724 20:45:53 -- nvmf/common.sh@296 -- # local -ga e810 00:14:28.724 20:45:53 -- nvmf/common.sh@297 -- # x722=() 00:14:28.724 20:45:53 -- nvmf/common.sh@297 -- # local -ga x722 00:14:28.724 20:45:53 -- nvmf/common.sh@298 -- # mlx=() 00:14:28.724 20:45:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:28.724 20:45:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:28.724 20:45:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:28.724 20:45:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:28.724 20:45:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:28.724 20:45:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.724 20:45:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:28.724 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:28.724 20:45:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.724 
20:45:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:28.724 20:45:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:28.724 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:28.724 20:45:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:28.724 20:45:53 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.724 20:45:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.724 20:45:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:28.724 20:45:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.724 20:45:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:28.724 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:28.724 20:45:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.724 20:45:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:28.724 20:45:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:28.724 20:45:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:28.724 20:45:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:28.724 20:45:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:28.724 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:28.724 20:45:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:28.724 20:45:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:28.724 20:45:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:28.724 20:45:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:28.724 20:45:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:28.724 20:45:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:28.724 20:45:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:28.724 20:45:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:28.724 20:45:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:28.724 20:45:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:28.724 20:45:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:28.724 20:45:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:28.724 20:45:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:28.724 20:45:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:28.724 20:45:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:28.724 20:45:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:28.724 20:45:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:28.724 20:45:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:28.985 20:45:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:28.985 20:45:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:28.985 20:45:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:28.985 
20:45:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:28.985 20:45:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:28.985 20:45:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:28.985 20:45:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:28.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:28.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.567 ms 00:14:28.985 00:14:28.985 --- 10.0.0.2 ping statistics --- 00:14:28.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.985 rtt min/avg/max/mdev = 0.567/0.567/0.567/0.000 ms 00:14:28.985 20:45:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:28.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:28.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:14:28.985 00:14:28.985 --- 10.0.0.1 ping statistics --- 00:14:28.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:28.985 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:14:28.985 20:45:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:28.985 20:45:53 -- nvmf/common.sh@411 -- # return 0 00:14:28.985 20:45:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:28.985 20:45:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:28.985 20:45:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:28.985 20:45:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:28.985 20:45:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:28.985 20:45:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:28.985 20:45:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:28.985 20:45:53 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:28.985 20:45:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:28.985 20:45:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:28.985 20:45:53 -- common/autotest_common.sh@10 -- # set +x 00:14:28.985 20:45:53 -- nvmf/common.sh@470 -- # nvmfpid=2727246 00:14:28.985 20:45:53 -- nvmf/common.sh@471 -- # waitforlisten 2727246 00:14:28.985 20:45:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:28.985 20:45:53 -- common/autotest_common.sh@817 -- # '[' -z 2727246 ']' 00:14:28.985 20:45:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.985 20:45:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:28.985 20:45:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.985 20:45:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:28.985 20:45:53 -- common/autotest_common.sh@10 -- # set +x 00:14:29.247 [2024-04-24 20:45:53.662025] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
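A condensed sketch of the nvmf_tcp_init sequence traced above, kept here for readability; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones this particular E810 host reported and will differ on other machines.

  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # host -> namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> host sanity check
  modprobe nvme-tcp                                                   # kernel NVMe/TCP initiator module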
00:14:29.247 [2024-04-24 20:45:53.662075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.247 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.247 [2024-04-24 20:45:53.750304] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.247 [2024-04-24 20:45:53.873841] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.247 [2024-04-24 20:45:53.873921] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.247 [2024-04-24 20:45:53.873933] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.247 [2024-04-24 20:45:53.873943] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.247 [2024-04-24 20:45:53.873952] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.247 [2024-04-24 20:45:53.873998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.192 20:45:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:30.192 20:45:54 -- common/autotest_common.sh@850 -- # return 0 00:14:30.192 20:45:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:30.192 20:45:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:30.192 20:45:54 -- common/autotest_common.sh@10 -- # set +x 00:14:30.192 20:45:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.192 20:45:54 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:30.192 [2024-04-24 20:45:54.830969] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.453 20:45:54 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:30.453 20:45:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:14:30.453 20:45:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.453 20:45:54 -- common/autotest_common.sh@10 -- # set +x 00:14:30.453 ************************************ 00:14:30.453 START TEST lvs_grow_clean 00:14:30.453 ************************************ 00:14:30.453 20:45:54 -- common/autotest_common.sh@1111 -- # lvs_grow 00:14:30.453 20:45:54 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:30.453 20:45:54 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:30.453 20:45:54 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:30.453 20:45:54 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:30.453 20:45:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:30.453 20:45:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:30.453 20:45:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.453 20:45:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:30.453 20:45:55 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:30.722 20:45:55 -- 
target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:30.722 20:45:55 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:30.982 20:45:55 -- target/nvmf_lvs_grow.sh@28 -- # lvs=fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:30.982 20:45:55 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:30.982 20:45:55 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:31.244 20:45:55 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:31.244 20:45:55 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:31.244 20:45:55 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 lvol 150 00:14:31.244 20:45:55 -- target/nvmf_lvs_grow.sh@33 -- # lvol=cfc37082-1ff3-4d3a-99ea-2e36faf2defa 00:14:31.244 20:45:55 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:31.244 20:45:55 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:31.504 [2024-04-24 20:45:56.062492] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:31.504 [2024-04-24 20:45:56.062565] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:31.504 true 00:14:31.504 20:45:56 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:31.504 20:45:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:31.765 20:45:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:31.765 20:45:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:32.025 20:45:56 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cfc37082-1ff3-4d3a-99ea-2e36faf2defa 00:14:32.286 20:45:56 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:32.286 [2024-04-24 20:45:56.913137] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.547 20:45:56 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:32.547 20:45:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2727961 00:14:32.547 20:45:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:32.547 20:45:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2727961 /var/tmp/bdevperf.sock 00:14:32.547 20:45:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 
-S 1 -z 00:14:32.547 20:45:57 -- common/autotest_common.sh@817 -- # '[' -z 2727961 ']' 00:14:32.547 20:45:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.547 20:45:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:32.547 20:45:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:32.547 20:45:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:32.547 20:45:57 -- common/autotest_common.sh@10 -- # set +x 00:14:32.807 [2024-04-24 20:45:57.195824] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:14:32.807 [2024-04-24 20:45:57.195890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2727961 ] 00:14:32.807 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.807 [2024-04-24 20:45:57.257878] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.807 [2024-04-24 20:45:57.329121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.807 20:45:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:32.807 20:45:57 -- common/autotest_common.sh@850 -- # return 0 00:14:32.807 20:45:57 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:33.378 Nvme0n1 00:14:33.378 20:45:57 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:33.378 [ 00:14:33.378 { 00:14:33.378 "name": "Nvme0n1", 00:14:33.378 "aliases": [ 00:14:33.378 "cfc37082-1ff3-4d3a-99ea-2e36faf2defa" 00:14:33.378 ], 00:14:33.378 "product_name": "NVMe disk", 00:14:33.378 "block_size": 4096, 00:14:33.378 "num_blocks": 38912, 00:14:33.378 "uuid": "cfc37082-1ff3-4d3a-99ea-2e36faf2defa", 00:14:33.378 "assigned_rate_limits": { 00:14:33.378 "rw_ios_per_sec": 0, 00:14:33.378 "rw_mbytes_per_sec": 0, 00:14:33.378 "r_mbytes_per_sec": 0, 00:14:33.378 "w_mbytes_per_sec": 0 00:14:33.378 }, 00:14:33.378 "claimed": false, 00:14:33.378 "zoned": false, 00:14:33.378 "supported_io_types": { 00:14:33.378 "read": true, 00:14:33.378 "write": true, 00:14:33.378 "unmap": true, 00:14:33.378 "write_zeroes": true, 00:14:33.378 "flush": true, 00:14:33.378 "reset": true, 00:14:33.378 "compare": true, 00:14:33.378 "compare_and_write": true, 00:14:33.378 "abort": true, 00:14:33.378 "nvme_admin": true, 00:14:33.378 "nvme_io": true 00:14:33.378 }, 00:14:33.378 "memory_domains": [ 00:14:33.378 { 00:14:33.378 "dma_device_id": "system", 00:14:33.378 "dma_device_type": 1 00:14:33.378 } 00:14:33.378 ], 00:14:33.378 "driver_specific": { 00:14:33.378 "nvme": [ 00:14:33.378 { 00:14:33.378 "trid": { 00:14:33.378 "trtype": "TCP", 00:14:33.378 "adrfam": "IPv4", 00:14:33.378 "traddr": "10.0.0.2", 00:14:33.378 "trsvcid": "4420", 00:14:33.378 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:33.378 }, 00:14:33.378 "ctrlr_data": { 00:14:33.378 "cntlid": 1, 00:14:33.378 "vendor_id": "0x8086", 00:14:33.378 "model_number": "SPDK bdev Controller", 00:14:33.378 "serial_number": "SPDK0", 00:14:33.378 
"firmware_revision": "24.05", 00:14:33.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:33.378 "oacs": { 00:14:33.378 "security": 0, 00:14:33.378 "format": 0, 00:14:33.378 "firmware": 0, 00:14:33.378 "ns_manage": 0 00:14:33.378 }, 00:14:33.378 "multi_ctrlr": true, 00:14:33.378 "ana_reporting": false 00:14:33.378 }, 00:14:33.378 "vs": { 00:14:33.378 "nvme_version": "1.3" 00:14:33.378 }, 00:14:33.378 "ns_data": { 00:14:33.378 "id": 1, 00:14:33.378 "can_share": true 00:14:33.378 } 00:14:33.378 } 00:14:33.378 ], 00:14:33.378 "mp_policy": "active_passive" 00:14:33.378 } 00:14:33.378 } 00:14:33.378 ] 00:14:33.378 20:45:57 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:33.378 20:45:57 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2728125 00:14:33.378 20:45:57 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:33.378 Running I/O for 10 seconds... 00:14:34.765 Latency(us) 00:14:34.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:34.765 Nvme0n1 : 1.00 17733.00 69.27 0.00 0.00 0.00 0.00 0.00 00:14:34.765 =================================================================================================================== 00:14:34.765 Total : 17733.00 69.27 0.00 0.00 0.00 0.00 0.00 00:14:34.765 00:14:35.336 20:45:59 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:35.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.595 Nvme0n1 : 2.00 17791.50 69.50 0.00 0.00 0.00 0.00 0.00 00:14:35.595 =================================================================================================================== 00:14:35.595 Total : 17791.50 69.50 0.00 0.00 0.00 0.00 0.00 00:14:35.595 00:14:35.595 true 00:14:35.595 20:46:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:35.595 20:46:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:35.854 20:46:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:35.854 20:46:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:35.854 20:46:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 2728125 00:14:36.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.423 Nvme0n1 : 3.00 17809.33 69.57 0.00 0.00 0.00 0.00 0.00 00:14:36.423 =================================================================================================================== 00:14:36.423 Total : 17809.33 69.57 0.00 0.00 0.00 0.00 0.00 00:14:36.423 00:14:37.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.805 Nvme0n1 : 4.00 17850.50 69.73 0.00 0.00 0.00 0.00 0.00 00:14:37.805 =================================================================================================================== 00:14:37.805 Total : 17850.50 69.73 0.00 0.00 0.00 0.00 0.00 00:14:37.805 00:14:38.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.745 Nvme0n1 : 5.00 17863.80 69.78 0.00 0.00 0.00 0.00 0.00 00:14:38.745 =================================================================================================================== 00:14:38.745 Total : 17863.80 69.78 
0.00 0.00 0.00 0.00 0.00 00:14:38.745 00:14:39.686 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.686 Nvme0n1 : 6.00 17884.83 69.86 0.00 0.00 0.00 0.00 0.00 00:14:39.686 =================================================================================================================== 00:14:39.686 Total : 17884.83 69.86 0.00 0.00 0.00 0.00 0.00 00:14:39.686 00:14:40.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.629 Nvme0n1 : 7.00 17903.57 69.94 0.00 0.00 0.00 0.00 0.00 00:14:40.629 =================================================================================================================== 00:14:40.629 Total : 17903.57 69.94 0.00 0.00 0.00 0.00 0.00 00:14:40.629 00:14:41.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.570 Nvme0n1 : 8.00 17912.38 69.97 0.00 0.00 0.00 0.00 0.00 00:14:41.570 =================================================================================================================== 00:14:41.570 Total : 17912.38 69.97 0.00 0.00 0.00 0.00 0.00 00:14:41.570 00:14:42.512 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:42.512 Nvme0n1 : 9.00 17927.22 70.03 0.00 0.00 0.00 0.00 0.00 00:14:42.512 =================================================================================================================== 00:14:42.512 Total : 17927.22 70.03 0.00 0.00 0.00 0.00 0.00 00:14:42.512 00:14:43.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.455 Nvme0n1 : 10.00 17932.10 70.05 0.00 0.00 0.00 0.00 0.00 00:14:43.455 =================================================================================================================== 00:14:43.455 Total : 17932.10 70.05 0.00 0.00 0.00 0.00 0.00 00:14:43.455 00:14:43.455 00:14:43.455 Latency(us) 00:14:43.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.455 Nvme0n1 : 10.01 17933.58 70.05 0.00 0.00 7133.91 2020.69 12451.84 00:14:43.455 =================================================================================================================== 00:14:43.455 Total : 17933.58 70.05 0.00 0.00 7133.91 2020.69 12451.84 00:14:43.455 0 00:14:43.455 20:46:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2727961 00:14:43.455 20:46:08 -- common/autotest_common.sh@936 -- # '[' -z 2727961 ']' 00:14:43.455 20:46:08 -- common/autotest_common.sh@940 -- # kill -0 2727961 00:14:43.455 20:46:08 -- common/autotest_common.sh@941 -- # uname 00:14:43.455 20:46:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:43.455 20:46:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2727961 00:14:43.715 20:46:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:43.715 20:46:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:43.715 20:46:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2727961' 00:14:43.715 killing process with pid 2727961 00:14:43.715 20:46:08 -- common/autotest_common.sh@955 -- # kill 2727961 00:14:43.715 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.715 00:14:43.715 Latency(us) 00:14:43.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.715 =================================================================================================================== 00:14:43.715 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:43.715 20:46:08 -- common/autotest_common.sh@960 -- # wait 2727961 00:14:43.715 20:46:08 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:43.991 20:46:08 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:43.991 20:46:08 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:44.271 20:46:08 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:44.271 20:46:08 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:44.271 20:46:08 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:44.271 [2024-04-24 20:46:08.866469] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:44.271 20:46:08 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:44.271 20:46:08 -- common/autotest_common.sh@638 -- # local es=0 00:14:44.271 20:46:08 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:44.271 20:46:08 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.271 20:46:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.271 20:46:08 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.271 20:46:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.271 20:46:08 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.271 20:46:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:44.271 20:46:08 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:44.271 20:46:08 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:44.271 20:46:08 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:44.532 request: 00:14:44.532 { 00:14:44.532 "uuid": "fc7e9f2a-904d-423f-90d6-e4fce5bc9143", 00:14:44.532 "method": "bdev_lvol_get_lvstores", 00:14:44.532 "req_id": 1 00:14:44.532 } 00:14:44.532 Got JSON-RPC error response 00:14:44.532 response: 00:14:44.532 { 00:14:44.532 "code": -19, 00:14:44.532 "message": "No such device" 00:14:44.532 } 00:14:44.532 20:46:09 -- common/autotest_common.sh@641 -- # es=1 00:14:44.532 20:46:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:44.532 20:46:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:44.532 20:46:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:44.532 20:46:09 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:44.792 aio_bdev 00:14:44.792 20:46:09 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 
cfc37082-1ff3-4d3a-99ea-2e36faf2defa 00:14:44.792 20:46:09 -- common/autotest_common.sh@885 -- # local bdev_name=cfc37082-1ff3-4d3a-99ea-2e36faf2defa 00:14:44.792 20:46:09 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:14:44.792 20:46:09 -- common/autotest_common.sh@887 -- # local i 00:14:44.792 20:46:09 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:14:44.792 20:46:09 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:14:44.792 20:46:09 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:45.052 20:46:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b cfc37082-1ff3-4d3a-99ea-2e36faf2defa -t 2000 00:14:45.313 [ 00:14:45.313 { 00:14:45.313 "name": "cfc37082-1ff3-4d3a-99ea-2e36faf2defa", 00:14:45.313 "aliases": [ 00:14:45.313 "lvs/lvol" 00:14:45.313 ], 00:14:45.313 "product_name": "Logical Volume", 00:14:45.313 "block_size": 4096, 00:14:45.313 "num_blocks": 38912, 00:14:45.313 "uuid": "cfc37082-1ff3-4d3a-99ea-2e36faf2defa", 00:14:45.313 "assigned_rate_limits": { 00:14:45.313 "rw_ios_per_sec": 0, 00:14:45.313 "rw_mbytes_per_sec": 0, 00:14:45.313 "r_mbytes_per_sec": 0, 00:14:45.313 "w_mbytes_per_sec": 0 00:14:45.313 }, 00:14:45.313 "claimed": false, 00:14:45.313 "zoned": false, 00:14:45.313 "supported_io_types": { 00:14:45.313 "read": true, 00:14:45.313 "write": true, 00:14:45.313 "unmap": true, 00:14:45.313 "write_zeroes": true, 00:14:45.313 "flush": false, 00:14:45.313 "reset": true, 00:14:45.313 "compare": false, 00:14:45.313 "compare_and_write": false, 00:14:45.313 "abort": false, 00:14:45.313 "nvme_admin": false, 00:14:45.313 "nvme_io": false 00:14:45.313 }, 00:14:45.313 "driver_specific": { 00:14:45.313 "lvol": { 00:14:45.313 "lvol_store_uuid": "fc7e9f2a-904d-423f-90d6-e4fce5bc9143", 00:14:45.313 "base_bdev": "aio_bdev", 00:14:45.313 "thin_provision": false, 00:14:45.313 "snapshot": false, 00:14:45.313 "clone": false, 00:14:45.313 "esnap_clone": false 00:14:45.313 } 00:14:45.313 } 00:14:45.313 } 00:14:45.313 ] 00:14:45.313 20:46:09 -- common/autotest_common.sh@893 -- # return 0 00:14:45.313 20:46:09 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:45.313 20:46:09 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:45.313 20:46:09 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:45.313 20:46:09 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:45.313 20:46:09 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:45.574 20:46:10 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:45.574 20:46:10 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cfc37082-1ff3-4d3a-99ea-2e36faf2defa 00:14:45.835 20:46:10 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fc7e9f2a-904d-423f-90d6-e4fce5bc9143 00:14:46.096 20:46:10 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:46.096 20:46:10 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 
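The lvs_grow_clean body that just completed boils down to the sequence below; this is a sketch condensed from the trace, with $rpc and $aio standing in for the full Jenkins rpc.py and aio_bdev file paths and $lvs for the lvstore UUID this run generated.

  rpc=scripts/rpc.py; aio=test/nvmf/target/aio_bdev                   # shorthand for the full paths above
  truncate -s 200M "$aio"                                             # 200M backing file -> 49 data clusters
  $rpc bdev_aio_create "$aio" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  $rpc bdev_lvol_create -u "$lvs" lvol 150                            # 150M lvol, later exported as an NVMe/TCP namespace
  truncate -s 400M "$aio"                                             # grow the file underneath the bdev
  $rpc bdev_aio_rescan aio_bdev                                       # block count 51200 -> 102400
  $rpc bdev_lvol_grow_lvstore -u "$lvs"                               # lvstore grows from 49 to 99 clusters
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'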
00:14:46.356 00:14:46.356 real 0m15.747s 00:14:46.356 user 0m15.507s 00:14:46.356 sys 0m1.334s 00:14:46.356 20:46:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:46.356 20:46:10 -- common/autotest_common.sh@10 -- # set +x 00:14:46.356 ************************************ 00:14:46.356 END TEST lvs_grow_clean 00:14:46.356 ************************************ 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:46.356 20:46:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:46.356 20:46:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:46.356 20:46:10 -- common/autotest_common.sh@10 -- # set +x 00:14:46.356 ************************************ 00:14:46.356 START TEST lvs_grow_dirty 00:14:46.356 ************************************ 00:14:46.356 20:46:10 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:46.356 20:46:10 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:46.616 20:46:11 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:46.616 20:46:11 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:46.877 20:46:11 -- target/nvmf_lvs_grow.sh@28 -- # lvs=40741454-a22d-419c-a7ef-9ce9a78583cb 00:14:46.877 20:46:11 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:14:46.877 20:46:11 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:47.138 20:46:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:47.138 20:46:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:47.138 20:46:11 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 40741454-a22d-419c-a7ef-9ce9a78583cb lvol 150 00:14:47.138 20:46:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=50609b56-4f7f-40ad-bf3e-b09454df3af3 00:14:47.138 20:46:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:47.138 20:46:11 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:47.398 [2024-04-24 20:46:11.953643] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 
51200, new block count 102400 00:14:47.398 [2024-04-24 20:46:11.953705] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:47.398 true 00:14:47.398 20:46:11 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:14:47.398 20:46:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:47.660 20:46:12 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:47.660 20:46:12 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:47.920 20:46:12 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 50609b56-4f7f-40ad-bf3e-b09454df3af3 00:14:47.920 20:46:12 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:48.182 20:46:12 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:48.443 20:46:12 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:48.443 20:46:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2731054 00:14:48.443 20:46:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.443 20:46:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2731054 /var/tmp/bdevperf.sock 00:14:48.443 20:46:12 -- common/autotest_common.sh@817 -- # '[' -z 2731054 ']' 00:14:48.443 20:46:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.443 20:46:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:48.443 20:46:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.443 20:46:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:48.443 20:46:12 -- common/autotest_common.sh@10 -- # set +x 00:14:48.443 [2024-04-24 20:46:12.983565] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
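From here the dirty run wires up bdevperf over NVMe/TCP exactly as the clean run did; the three commands below are the ones visible in the trace, with the repository-relative paths shortened.

  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0   # Nvme0n1 shows up
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests          # 10 s randwrite run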
00:14:48.443 [2024-04-24 20:46:12.983615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2731054 ] 00:14:48.443 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.443 [2024-04-24 20:46:13.044102] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.703 [2024-04-24 20:46:13.106612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.703 20:46:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:48.703 20:46:13 -- common/autotest_common.sh@850 -- # return 0 00:14:48.703 20:46:13 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:48.964 Nvme0n1 00:14:48.964 20:46:13 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:49.224 [ 00:14:49.224 { 00:14:49.224 "name": "Nvme0n1", 00:14:49.224 "aliases": [ 00:14:49.224 "50609b56-4f7f-40ad-bf3e-b09454df3af3" 00:14:49.224 ], 00:14:49.224 "product_name": "NVMe disk", 00:14:49.224 "block_size": 4096, 00:14:49.224 "num_blocks": 38912, 00:14:49.224 "uuid": "50609b56-4f7f-40ad-bf3e-b09454df3af3", 00:14:49.224 "assigned_rate_limits": { 00:14:49.224 "rw_ios_per_sec": 0, 00:14:49.224 "rw_mbytes_per_sec": 0, 00:14:49.224 "r_mbytes_per_sec": 0, 00:14:49.224 "w_mbytes_per_sec": 0 00:14:49.224 }, 00:14:49.224 "claimed": false, 00:14:49.224 "zoned": false, 00:14:49.224 "supported_io_types": { 00:14:49.224 "read": true, 00:14:49.224 "write": true, 00:14:49.224 "unmap": true, 00:14:49.224 "write_zeroes": true, 00:14:49.224 "flush": true, 00:14:49.224 "reset": true, 00:14:49.224 "compare": true, 00:14:49.224 "compare_and_write": true, 00:14:49.224 "abort": true, 00:14:49.224 "nvme_admin": true, 00:14:49.224 "nvme_io": true 00:14:49.224 }, 00:14:49.224 "memory_domains": [ 00:14:49.224 { 00:14:49.224 "dma_device_id": "system", 00:14:49.225 "dma_device_type": 1 00:14:49.225 } 00:14:49.225 ], 00:14:49.225 "driver_specific": { 00:14:49.225 "nvme": [ 00:14:49.225 { 00:14:49.225 "trid": { 00:14:49.225 "trtype": "TCP", 00:14:49.225 "adrfam": "IPv4", 00:14:49.225 "traddr": "10.0.0.2", 00:14:49.225 "trsvcid": "4420", 00:14:49.225 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:49.225 }, 00:14:49.225 "ctrlr_data": { 00:14:49.225 "cntlid": 1, 00:14:49.225 "vendor_id": "0x8086", 00:14:49.225 "model_number": "SPDK bdev Controller", 00:14:49.225 "serial_number": "SPDK0", 00:14:49.225 "firmware_revision": "24.05", 00:14:49.225 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.225 "oacs": { 00:14:49.225 "security": 0, 00:14:49.225 "format": 0, 00:14:49.225 "firmware": 0, 00:14:49.225 "ns_manage": 0 00:14:49.225 }, 00:14:49.225 "multi_ctrlr": true, 00:14:49.225 "ana_reporting": false 00:14:49.225 }, 00:14:49.225 "vs": { 00:14:49.225 "nvme_version": "1.3" 00:14:49.225 }, 00:14:49.225 "ns_data": { 00:14:49.225 "id": 1, 00:14:49.225 "can_share": true 00:14:49.225 } 00:14:49.225 } 00:14:49.225 ], 00:14:49.225 "mp_policy": "active_passive" 00:14:49.225 } 00:14:49.225 } 00:14:49.225 ] 00:14:49.225 20:46:13 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2731360 00:14:49.225 20:46:13 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:49.225 20:46:13 -- target/nvmf_lvs_grow.sh@55 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:49.225 Running I/O for 10 seconds... 00:14:50.611 Latency(us) 00:14:50.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.611 Nvme0n1 : 1.00 17594.00 68.73 0.00 0.00 0.00 0.00 0.00 00:14:50.611 =================================================================================================================== 00:14:50.611 Total : 17594.00 68.73 0.00 0.00 0.00 0.00 0.00 00:14:50.611 00:14:51.181 20:46:15 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:14:51.442 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.442 Nvme0n1 : 2.00 17722.00 69.23 0.00 0.00 0.00 0.00 0.00 00:14:51.442 =================================================================================================================== 00:14:51.442 Total : 17722.00 69.23 0.00 0.00 0.00 0.00 0.00 00:14:51.442 00:14:51.442 true 00:14:51.442 20:46:15 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:14:51.442 20:46:15 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:51.703 20:46:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:51.703 20:46:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:51.703 20:46:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 2731360 00:14:52.273 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.273 Nvme0n1 : 3.00 17806.33 69.56 0.00 0.00 0.00 0.00 0.00 00:14:52.273 =================================================================================================================== 00:14:52.273 Total : 17806.33 69.56 0.00 0.00 0.00 0.00 0.00 00:14:52.273 00:14:53.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.214 Nvme0n1 : 4.00 17833.25 69.66 0.00 0.00 0.00 0.00 0.00 00:14:53.214 =================================================================================================================== 00:14:53.214 Total : 17833.25 69.66 0.00 0.00 0.00 0.00 0.00 00:14:53.214 00:14:54.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:54.602 Nvme0n1 : 5.00 17861.80 69.77 0.00 0.00 0.00 0.00 0.00 00:14:54.602 =================================================================================================================== 00:14:54.602 Total : 17861.80 69.77 0.00 0.00 0.00 0.00 0.00 00:14:54.602 00:14:55.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:55.544 Nvme0n1 : 6.00 17889.83 69.88 0.00 0.00 0.00 0.00 0.00 00:14:55.544 =================================================================================================================== 00:14:55.544 Total : 17889.83 69.88 0.00 0.00 0.00 0.00 0.00 00:14:55.544 00:14:56.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:56.484 Nvme0n1 : 7.00 17901.86 69.93 0.00 0.00 0.00 0.00 0.00 00:14:56.484 =================================================================================================================== 00:14:56.484 Total : 17901.86 69.93 0.00 0.00 0.00 0.00 0.00 00:14:56.484 00:14:57.437 Job: Nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 128, IO size: 4096) 00:14:57.437 Nvme0n1 : 8.00 17911.12 69.97 0.00 0.00 0.00 0.00 0.00 00:14:57.437 =================================================================================================================== 00:14:57.437 Total : 17911.12 69.97 0.00 0.00 0.00 0.00 0.00 00:14:57.437 00:14:58.380 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:58.380 Nvme0n1 : 9.00 17918.89 70.00 0.00 0.00 0.00 0.00 0.00 00:14:58.380 =================================================================================================================== 00:14:58.380 Total : 17918.89 70.00 0.00 0.00 0.00 0.00 0.00 00:14:58.380 00:14:59.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.325 Nvme0n1 : 10.00 17930.60 70.04 0.00 0.00 0.00 0.00 0.00 00:14:59.325 =================================================================================================================== 00:14:59.325 Total : 17930.60 70.04 0.00 0.00 0.00 0.00 0.00 00:14:59.325 00:14:59.325 00:14:59.325 Latency(us) 00:14:59.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.325 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.325 Nvme0n1 : 10.01 17930.50 70.04 0.00 0.00 7134.44 4314.45 16820.91 00:14:59.325 =================================================================================================================== 00:14:59.325 Total : 17930.50 70.04 0.00 0.00 7134.44 4314.45 16820.91 00:14:59.325 0 00:14:59.325 20:46:23 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2731054 00:14:59.325 20:46:23 -- common/autotest_common.sh@936 -- # '[' -z 2731054 ']' 00:14:59.325 20:46:23 -- common/autotest_common.sh@940 -- # kill -0 2731054 00:14:59.325 20:46:23 -- common/autotest_common.sh@941 -- # uname 00:14:59.325 20:46:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:59.325 20:46:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2731054 00:14:59.325 20:46:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:59.325 20:46:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:59.325 20:46:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2731054' 00:14:59.325 killing process with pid 2731054 00:14:59.325 20:46:23 -- common/autotest_common.sh@955 -- # kill 2731054 00:14:59.325 Received shutdown signal, test time was about 10.000000 seconds 00:14:59.325 00:14:59.325 Latency(us) 00:14:59.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.325 =================================================================================================================== 00:14:59.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:59.325 20:46:23 -- common/autotest_common.sh@960 -- # wait 2731054 00:14:59.587 20:46:24 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:59.848 20:46:24 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:14:59.848 20:46:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:59.848 20:46:24 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:59.848 20:46:24 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:59.848 20:46:24 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2727246 00:14:59.848 
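What makes this the dirty variant: the first nvmf_tgt (pid 2727246) is killed with SIGKILL while the lvstore is still loaded, so its metadata is never written out cleanly. A sketch of that step and the target restart, condensed from the trace, with $nvmfpid standing for the pid captured earlier:

  kill -9 "$nvmfpid"                                                  # leave the lvstore dirty on disk
  wait "$nvmfpid"                                                     # reap the killed target
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # fresh target (pid 2733401 here)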
20:46:24 -- target/nvmf_lvs_grow.sh@74 -- # wait 2727246 00:15:00.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2727246 Killed "${NVMF_APP[@]}" "$@" 00:15:00.109 20:46:24 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:00.109 20:46:24 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:00.109 20:46:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:00.109 20:46:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:00.109 20:46:24 -- common/autotest_common.sh@10 -- # set +x 00:15:00.109 20:46:24 -- nvmf/common.sh@470 -- # nvmfpid=2733401 00:15:00.109 20:46:24 -- nvmf/common.sh@471 -- # waitforlisten 2733401 00:15:00.109 20:46:24 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:00.109 20:46:24 -- common/autotest_common.sh@817 -- # '[' -z 2733401 ']' 00:15:00.109 20:46:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.109 20:46:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:00.109 20:46:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.109 20:46:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:00.109 20:46:24 -- common/autotest_common.sh@10 -- # set +x 00:15:00.109 [2024-04-24 20:46:24.574529] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:00.109 [2024-04-24 20:46:24.574583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.109 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.109 [2024-04-24 20:46:24.658293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.109 [2024-04-24 20:46:24.721883] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.109 [2024-04-24 20:46:24.721918] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.109 [2024-04-24 20:46:24.721925] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.109 [2024-04-24 20:46:24.721932] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.109 [2024-04-24 20:46:24.721937] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
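With the replacement target up, re-creating the AIO bdev is what forces the blobstore recovery visible just below, and the test then checks that the grown geometry survived it. A sketch of those calls, using the same $rpc/$aio shorthand as above and $lvs for this run's lvstore UUID (40741454-a22d-419c-a7ef-9ce9a78583cb):

  $rpc bdev_aio_create "$aio" aio_bdev 4096                           # triggers 'Performing recovery on blobstore'
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # expect 61 after recovery
  $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # expect 99, i.e. the grow persisted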
00:15:00.109 [2024-04-24 20:46:24.721957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.050 20:46:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:01.050 20:46:25 -- common/autotest_common.sh@850 -- # return 0 00:15:01.050 20:46:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:01.050 20:46:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:01.050 20:46:25 -- common/autotest_common.sh@10 -- # set +x 00:15:01.050 20:46:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.050 20:46:25 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:01.050 [2024-04-24 20:46:25.659049] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:01.050 [2024-04-24 20:46:25.659140] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:01.050 [2024-04-24 20:46:25.659170] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:01.050 20:46:25 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:01.050 20:46:25 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 50609b56-4f7f-40ad-bf3e-b09454df3af3 00:15:01.050 20:46:25 -- common/autotest_common.sh@885 -- # local bdev_name=50609b56-4f7f-40ad-bf3e-b09454df3af3 00:15:01.050 20:46:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:01.050 20:46:25 -- common/autotest_common.sh@887 -- # local i 00:15:01.050 20:46:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:01.050 20:46:25 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:01.050 20:46:25 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:01.311 20:46:25 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 50609b56-4f7f-40ad-bf3e-b09454df3af3 -t 2000 00:15:01.571 [ 00:15:01.571 { 00:15:01.571 "name": "50609b56-4f7f-40ad-bf3e-b09454df3af3", 00:15:01.571 "aliases": [ 00:15:01.571 "lvs/lvol" 00:15:01.571 ], 00:15:01.571 "product_name": "Logical Volume", 00:15:01.571 "block_size": 4096, 00:15:01.571 "num_blocks": 38912, 00:15:01.571 "uuid": "50609b56-4f7f-40ad-bf3e-b09454df3af3", 00:15:01.571 "assigned_rate_limits": { 00:15:01.571 "rw_ios_per_sec": 0, 00:15:01.571 "rw_mbytes_per_sec": 0, 00:15:01.571 "r_mbytes_per_sec": 0, 00:15:01.571 "w_mbytes_per_sec": 0 00:15:01.571 }, 00:15:01.571 "claimed": false, 00:15:01.571 "zoned": false, 00:15:01.571 "supported_io_types": { 00:15:01.571 "read": true, 00:15:01.571 "write": true, 00:15:01.571 "unmap": true, 00:15:01.571 "write_zeroes": true, 00:15:01.571 "flush": false, 00:15:01.571 "reset": true, 00:15:01.571 "compare": false, 00:15:01.571 "compare_and_write": false, 00:15:01.571 "abort": false, 00:15:01.571 "nvme_admin": false, 00:15:01.571 "nvme_io": false 00:15:01.571 }, 00:15:01.571 "driver_specific": { 00:15:01.571 "lvol": { 00:15:01.571 "lvol_store_uuid": "40741454-a22d-419c-a7ef-9ce9a78583cb", 00:15:01.571 "base_bdev": "aio_bdev", 00:15:01.571 "thin_provision": false, 00:15:01.571 "snapshot": false, 00:15:01.571 "clone": false, 00:15:01.571 "esnap_clone": false 00:15:01.571 } 00:15:01.571 } 00:15:01.571 } 00:15:01.571 ] 00:15:01.571 20:46:26 -- common/autotest_common.sh@893 -- # return 0 00:15:01.571 20:46:26 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:01.571 20:46:26 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:01.831 20:46:26 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:01.831 20:46:26 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:01.831 20:46:26 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:02.091 20:46:26 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:02.092 20:46:26 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:02.092 [2024-04-24 20:46:26.691652] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:02.092 20:46:26 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:02.092 20:46:26 -- common/autotest_common.sh@638 -- # local es=0 00:15:02.092 20:46:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:02.092 20:46:26 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.092 20:46:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.092 20:46:26 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.092 20:46:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.092 20:46:26 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.092 20:46:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.092 20:46:26 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.092 20:46:26 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.092 20:46:26 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:02.352 request: 00:15:02.352 { 00:15:02.352 "uuid": "40741454-a22d-419c-a7ef-9ce9a78583cb", 00:15:02.352 "method": "bdev_lvol_get_lvstores", 00:15:02.352 "req_id": 1 00:15:02.352 } 00:15:02.352 Got JSON-RPC error response 00:15:02.352 response: 00:15:02.352 { 00:15:02.352 "code": -19, 00:15:02.352 "message": "No such device" 00:15:02.352 } 00:15:02.352 20:46:26 -- common/autotest_common.sh@641 -- # es=1 00:15:02.352 20:46:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:02.352 20:46:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:02.352 20:46:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:02.352 20:46:26 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:02.613 aio_bdev 00:15:02.613 20:46:27 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 50609b56-4f7f-40ad-bf3e-b09454df3af3 00:15:02.613 20:46:27 -- 
common/autotest_common.sh@885 -- # local bdev_name=50609b56-4f7f-40ad-bf3e-b09454df3af3 00:15:02.613 20:46:27 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:15:02.613 20:46:27 -- common/autotest_common.sh@887 -- # local i 00:15:02.613 20:46:27 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:15:02.613 20:46:27 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:15:02.613 20:46:27 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:02.874 20:46:27 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 50609b56-4f7f-40ad-bf3e-b09454df3af3 -t 2000 00:15:03.134 [ 00:15:03.134 { 00:15:03.134 "name": "50609b56-4f7f-40ad-bf3e-b09454df3af3", 00:15:03.134 "aliases": [ 00:15:03.134 "lvs/lvol" 00:15:03.134 ], 00:15:03.134 "product_name": "Logical Volume", 00:15:03.134 "block_size": 4096, 00:15:03.134 "num_blocks": 38912, 00:15:03.134 "uuid": "50609b56-4f7f-40ad-bf3e-b09454df3af3", 00:15:03.134 "assigned_rate_limits": { 00:15:03.134 "rw_ios_per_sec": 0, 00:15:03.134 "rw_mbytes_per_sec": 0, 00:15:03.134 "r_mbytes_per_sec": 0, 00:15:03.134 "w_mbytes_per_sec": 0 00:15:03.134 }, 00:15:03.134 "claimed": false, 00:15:03.134 "zoned": false, 00:15:03.134 "supported_io_types": { 00:15:03.134 "read": true, 00:15:03.134 "write": true, 00:15:03.134 "unmap": true, 00:15:03.134 "write_zeroes": true, 00:15:03.134 "flush": false, 00:15:03.134 "reset": true, 00:15:03.134 "compare": false, 00:15:03.134 "compare_and_write": false, 00:15:03.134 "abort": false, 00:15:03.134 "nvme_admin": false, 00:15:03.134 "nvme_io": false 00:15:03.134 }, 00:15:03.134 "driver_specific": { 00:15:03.134 "lvol": { 00:15:03.134 "lvol_store_uuid": "40741454-a22d-419c-a7ef-9ce9a78583cb", 00:15:03.134 "base_bdev": "aio_bdev", 00:15:03.134 "thin_provision": false, 00:15:03.134 "snapshot": false, 00:15:03.134 "clone": false, 00:15:03.134 "esnap_clone": false 00:15:03.134 } 00:15:03.134 } 00:15:03.134 } 00:15:03.134 ] 00:15:03.134 20:46:27 -- common/autotest_common.sh@893 -- # return 0 00:15:03.134 20:46:27 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:03.134 20:46:27 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:03.134 20:46:27 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:03.134 20:46:27 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:03.134 20:46:27 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:03.395 20:46:27 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:03.395 20:46:27 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 50609b56-4f7f-40ad-bf3e-b09454df3af3 00:15:03.655 20:46:28 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 40741454-a22d-419c-a7ef-9ce9a78583cb 00:15:03.916 20:46:28 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:03.916 20:46:28 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:04.177 00:15:04.177 real 0m17.638s 00:15:04.177 user 
0m46.089s 00:15:04.177 sys 0m2.854s 00:15:04.177 20:46:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:04.177 20:46:28 -- common/autotest_common.sh@10 -- # set +x 00:15:04.177 ************************************ 00:15:04.177 END TEST lvs_grow_dirty 00:15:04.177 ************************************ 00:15:04.177 20:46:28 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:04.177 20:46:28 -- common/autotest_common.sh@794 -- # type=--id 00:15:04.177 20:46:28 -- common/autotest_common.sh@795 -- # id=0 00:15:04.177 20:46:28 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:15:04.177 20:46:28 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:04.177 20:46:28 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:15:04.177 20:46:28 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:15:04.177 20:46:28 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:15:04.177 20:46:28 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:04.177 nvmf_trace.0 00:15:04.177 20:46:28 -- common/autotest_common.sh@809 -- # return 0 00:15:04.177 20:46:28 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:04.177 20:46:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:04.177 20:46:28 -- nvmf/common.sh@117 -- # sync 00:15:04.177 20:46:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:04.177 20:46:28 -- nvmf/common.sh@120 -- # set +e 00:15:04.177 20:46:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:04.177 20:46:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:04.177 rmmod nvme_tcp 00:15:04.177 rmmod nvme_fabrics 00:15:04.177 rmmod nvme_keyring 00:15:04.177 20:46:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:04.177 20:46:28 -- nvmf/common.sh@124 -- # set -e 00:15:04.177 20:46:28 -- nvmf/common.sh@125 -- # return 0 00:15:04.177 20:46:28 -- nvmf/common.sh@478 -- # '[' -n 2733401 ']' 00:15:04.177 20:46:28 -- nvmf/common.sh@479 -- # killprocess 2733401 00:15:04.177 20:46:28 -- common/autotest_common.sh@936 -- # '[' -z 2733401 ']' 00:15:04.177 20:46:28 -- common/autotest_common.sh@940 -- # kill -0 2733401 00:15:04.177 20:46:28 -- common/autotest_common.sh@941 -- # uname 00:15:04.177 20:46:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:04.177 20:46:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2733401 00:15:04.177 20:46:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:04.177 20:46:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:04.177 20:46:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2733401' 00:15:04.177 killing process with pid 2733401 00:15:04.177 20:46:28 -- common/autotest_common.sh@955 -- # kill 2733401 00:15:04.177 20:46:28 -- common/autotest_common.sh@960 -- # wait 2733401 00:15:04.438 20:46:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:04.438 20:46:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:04.438 20:46:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:04.438 20:46:28 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.438 20:46:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.438 20:46:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.438 20:46:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.438 20:46:28 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:06.352 20:46:30 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:06.613 00:15:06.613 real 0m44.393s 00:15:06.613 user 1m8.394s 00:15:06.613 sys 0m9.955s 00:15:06.613 20:46:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.613 20:46:30 -- common/autotest_common.sh@10 -- # set +x 00:15:06.613 ************************************ 00:15:06.613 END TEST nvmf_lvs_grow 00:15:06.613 ************************************ 00:15:06.613 20:46:31 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:06.613 20:46:31 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:06.613 20:46:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.613 20:46:31 -- common/autotest_common.sh@10 -- # set +x 00:15:06.613 ************************************ 00:15:06.613 START TEST nvmf_bdev_io_wait 00:15:06.613 ************************************ 00:15:06.613 20:46:31 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:06.875 * Looking for test storage... 00:15:06.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.875 20:46:31 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.875 20:46:31 -- nvmf/common.sh@7 -- # uname -s 00:15:06.875 20:46:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.875 20:46:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.875 20:46:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.875 20:46:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.875 20:46:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.875 20:46:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.875 20:46:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.875 20:46:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.875 20:46:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.875 20:46:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.875 20:46:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:06.875 20:46:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:06.875 20:46:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.875 20:46:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.875 20:46:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.875 20:46:31 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.875 20:46:31 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.875 20:46:31 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.875 20:46:31 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.875 20:46:31 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.875 20:46:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.875 20:46:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.875 20:46:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.875 20:46:31 -- paths/export.sh@5 -- # export PATH 00:15:06.875 20:46:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.875 20:46:31 -- nvmf/common.sh@47 -- # : 0 00:15:06.875 20:46:31 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.875 20:46:31 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.875 20:46:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.875 20:46:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.875 20:46:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.875 20:46:31 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.875 20:46:31 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.875 20:46:31 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.875 20:46:31 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.875 20:46:31 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.875 20:46:31 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:06.875 20:46:31 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:06.875 20:46:31 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.875 20:46:31 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:06.875 20:46:31 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:06.875 20:46:31 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:06.875 20:46:31 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.875 20:46:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.875 20:46:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.875 20:46:31 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:06.875 20:46:31 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:06.875 20:46:31 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.875 20:46:31 -- common/autotest_common.sh@10 -- # set +x 00:15:13.468 20:46:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:13.468 20:46:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:13.468 20:46:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:13.468 20:46:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:13.468 20:46:37 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:13.468 20:46:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:13.468 20:46:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:13.468 20:46:37 -- nvmf/common.sh@295 -- # net_devs=() 00:15:13.468 20:46:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:13.468 20:46:37 -- nvmf/common.sh@296 -- # e810=() 00:15:13.468 20:46:37 -- nvmf/common.sh@296 -- # local -ga e810 00:15:13.468 20:46:37 -- nvmf/common.sh@297 -- # x722=() 00:15:13.468 20:46:37 -- nvmf/common.sh@297 -- # local -ga x722 00:15:13.468 20:46:37 -- nvmf/common.sh@298 -- # mlx=() 00:15:13.468 20:46:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:13.468 20:46:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.468 20:46:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:13.468 20:46:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:13.468 20:46:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:13.468 20:46:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:13.468 20:46:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:13.468 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:13.468 20:46:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
00:15:13.468 20:46:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:13.468 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:13.468 20:46:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:13.468 20:46:37 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.468 20:46:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.468 20:46:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:13.468 20:46:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.468 20:46:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:13.468 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:13.468 20:46:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.468 20:46:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:13.468 20:46:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.468 20:46:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:13.468 20:46:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.468 20:46:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:13.468 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:13.468 20:46:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.468 20:46:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:13.468 20:46:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:13.468 20:46:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:13.468 20:46:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:13.468 20:46:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.468 20:46:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.468 20:46:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.469 20:46:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:13.469 20:46:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.469 20:46:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.469 20:46:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:13.469 20:46:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.469 20:46:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.469 20:46:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:13.469 20:46:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:13.469 20:46:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.469 20:46:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.469 20:46:37 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.469 20:46:37 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.469 20:46:37 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:13.469 20:46:37 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.469 20:46:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.469 20:46:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.469 20:46:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:13.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:13.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:15:13.469 00:15:13.469 --- 10.0.0.2 ping statistics --- 00:15:13.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.469 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:15:13.469 20:46:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:15:13.469 00:15:13.469 --- 10.0.0.1 ping statistics --- 00:15:13.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.469 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:15:13.469 20:46:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.469 20:46:38 -- nvmf/common.sh@411 -- # return 0 00:15:13.469 20:46:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:13.469 20:46:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.469 20:46:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:13.469 20:46:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:13.469 20:46:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.469 20:46:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:13.469 20:46:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:13.469 20:46:38 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:13.469 20:46:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:13.469 20:46:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:13.469 20:46:38 -- common/autotest_common.sh@10 -- # set +x 00:15:13.735 20:46:38 -- nvmf/common.sh@470 -- # nvmfpid=2738412 00:15:13.735 20:46:38 -- nvmf/common.sh@471 -- # waitforlisten 2738412 00:15:13.735 20:46:38 -- common/autotest_common.sh@817 -- # '[' -z 2738412 ']' 00:15:13.735 20:46:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.735 20:46:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.735 20:46:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.735 20:46:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:13.735 20:46:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.735 20:46:38 -- common/autotest_common.sh@10 -- # set +x 00:15:13.735 [2024-04-24 20:46:38.170505] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:15:13.735 [2024-04-24 20:46:38.170572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.735 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.735 [2024-04-24 20:46:38.257216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:13.735 [2024-04-24 20:46:38.352111] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.735 [2024-04-24 20:46:38.352173] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.735 [2024-04-24 20:46:38.352182] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.735 [2024-04-24 20:46:38.352188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.735 [2024-04-24 20:46:38.352195] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.735 [2024-04-24 20:46:38.352326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.735 [2024-04-24 20:46:38.352453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.735 [2024-04-24 20:46:38.352619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.735 [2024-04-24 20:46:38.352619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.678 20:46:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:14.678 20:46:39 -- common/autotest_common.sh@850 -- # return 0 00:15:14.678 20:46:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:14.678 20:46:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:14.678 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:46:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.678 20:46:39 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:14.678 20:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.678 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.678 20:46:39 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:14.678 20:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.678 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 20:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.678 20:46:39 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:14.678 20:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.678 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.678 [2024-04-24 20:46:39.152778] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.678 20:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:14.679 20:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.679 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.679 Malloc0 00:15:14.679 20:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:14.679 20:46:39 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.679 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.679 20:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:14.679 20:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.679 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.679 20:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.679 20:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.679 20:46:39 -- common/autotest_common.sh@10 -- # set +x 00:15:14.679 [2024-04-24 20:46:39.225019] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.679 20:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2738504 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@30 -- # READ_PID=2738507 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # config=() 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # local subsystem config 00:15:14.679 20:46:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:14.679 { 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme$subsystem", 00:15:14.679 "trtype": "$TEST_TRANSPORT", 00:15:14.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "$NVMF_PORT", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.679 "hdgst": ${hdgst:-false}, 00:15:14.679 "ddgst": ${ddgst:-false} 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 } 00:15:14.679 EOF 00:15:14.679 )") 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2738509 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # config=() 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # local subsystem config 00:15:14.679 20:46:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2738513 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:14.679 { 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme$subsystem", 00:15:14.679 "trtype": "$TEST_TRANSPORT", 00:15:14.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "$NVMF_PORT", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.679 "hdgst": ${hdgst:-false}, 00:15:14.679 "ddgst": ${ddgst:-false} 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 } 00:15:14.679 EOF 00:15:14.679 )") 00:15:14.679 
20:46:39 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@35 -- # sync 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # config=() 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # cat 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # local subsystem config 00:15:14.679 20:46:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:14.679 { 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme$subsystem", 00:15:14.679 "trtype": "$TEST_TRANSPORT", 00:15:14.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "$NVMF_PORT", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.679 "hdgst": ${hdgst:-false}, 00:15:14.679 "ddgst": ${ddgst:-false} 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 } 00:15:14.679 EOF 00:15:14.679 )") 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # config=() 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # cat 00:15:14.679 20:46:39 -- nvmf/common.sh@521 -- # local subsystem config 00:15:14.679 20:46:39 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:14.679 { 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme$subsystem", 00:15:14.679 "trtype": "$TEST_TRANSPORT", 00:15:14.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "$NVMF_PORT", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.679 "hdgst": ${hdgst:-false}, 00:15:14.679 "ddgst": ${ddgst:-false} 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 } 00:15:14.679 EOF 00:15:14.679 )") 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # cat 00:15:14.679 20:46:39 -- target/bdev_io_wait.sh@37 -- # wait 2738504 00:15:14.679 20:46:39 -- nvmf/common.sh@543 -- # cat 00:15:14.679 20:46:39 -- nvmf/common.sh@545 -- # jq . 00:15:14.679 20:46:39 -- nvmf/common.sh@545 -- # jq . 00:15:14.679 20:46:39 -- nvmf/common.sh@545 -- # jq . 00:15:14.679 20:46:39 -- nvmf/common.sh@546 -- # IFS=, 00:15:14.679 20:46:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme1", 00:15:14.679 "trtype": "tcp", 00:15:14.679 "traddr": "10.0.0.2", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "4420", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.679 "hdgst": false, 00:15:14.679 "ddgst": false 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 }' 00:15:14.679 20:46:39 -- nvmf/common.sh@545 -- # jq . 
00:15:14.679 20:46:39 -- nvmf/common.sh@546 -- # IFS=, 00:15:14.679 20:46:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme1", 00:15:14.679 "trtype": "tcp", 00:15:14.679 "traddr": "10.0.0.2", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "4420", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.679 "hdgst": false, 00:15:14.679 "ddgst": false 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 }' 00:15:14.679 20:46:39 -- nvmf/common.sh@546 -- # IFS=, 00:15:14.679 20:46:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme1", 00:15:14.679 "trtype": "tcp", 00:15:14.679 "traddr": "10.0.0.2", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "4420", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.679 "hdgst": false, 00:15:14.679 "ddgst": false 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 }' 00:15:14.679 20:46:39 -- nvmf/common.sh@546 -- # IFS=, 00:15:14.679 20:46:39 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:14.679 "params": { 00:15:14.679 "name": "Nvme1", 00:15:14.679 "trtype": "tcp", 00:15:14.679 "traddr": "10.0.0.2", 00:15:14.679 "adrfam": "ipv4", 00:15:14.679 "trsvcid": "4420", 00:15:14.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.679 "hdgst": false, 00:15:14.679 "ddgst": false 00:15:14.679 }, 00:15:14.679 "method": "bdev_nvme_attach_controller" 00:15:14.679 }' 00:15:14.679 [2024-04-24 20:46:39.278233] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:14.679 [2024-04-24 20:46:39.278283] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:14.679 [2024-04-24 20:46:39.278970] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:14.679 [2024-04-24 20:46:39.279013] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:14.679 [2024-04-24 20:46:39.280030] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:14.679 [2024-04-24 20:46:39.280076] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:14.679 [2024-04-24 20:46:39.282364] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:15:14.680 [2024-04-24 20:46:39.282435] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:14.941 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.941 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.941 [2024-04-24 20:46:39.429269] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.941 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.941 [2024-04-24 20:46:39.478662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:14.941 [2024-04-24 20:46:39.485593] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.941 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.941 [2024-04-24 20:46:39.534289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.941 [2024-04-24 20:46:39.535181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:15.201 [2024-04-24 20:46:39.581778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.201 [2024-04-24 20:46:39.583143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:15.201 [2024-04-24 20:46:39.629253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:15.201 Running I/O for 1 seconds... 00:15:15.201 Running I/O for 1 seconds... 00:15:15.201 Running I/O for 1 seconds... 00:15:15.462 Running I/O for 1 seconds... 00:15:16.405 00:15:16.405 Latency(us) 00:15:16.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.405 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:16.405 Nvme1n1 : 1.01 11267.04 44.01 0.00 0.00 11287.24 4450.99 18459.31 00:15:16.405 =================================================================================================================== 00:15:16.405 Total : 11267.04 44.01 0.00 0.00 11287.24 4450.99 18459.31 00:15:16.405 00:15:16.405 Latency(us) 00:15:16.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.405 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:16.405 Nvme1n1 : 1.00 193654.35 756.46 0.00 0.00 658.43 261.12 1181.01 00:15:16.405 =================================================================================================================== 00:15:16.405 Total : 193654.35 756.46 0.00 0.00 658.43 261.12 1181.01 00:15:16.405 00:15:16.405 Latency(us) 00:15:16.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.405 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:16.405 Nvme1n1 : 1.00 10986.12 42.91 0.00 0.00 11625.09 3495.25 25995.95 00:15:16.405 =================================================================================================================== 00:15:16.405 Total : 10986.12 42.91 0.00 0.00 11625.09 3495.25 25995.95 00:15:16.405 00:15:16.405 Latency(us) 00:15:16.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.406 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:16.406 Nvme1n1 : 1.01 12547.22 49.01 0.00 0.00 10165.40 6225.92 21299.20 00:15:16.406 =================================================================================================================== 00:15:16.406 Total : 12547.22 49.01 0.00 0.00 10165.40 6225.92 21299.20 00:15:16.406 20:46:40 -- target/bdev_io_wait.sh@38 -- # wait 2738507 
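The four latency tables above come from the separate bdevperf instances (unmap, flush, write, read) whose reactors started on cores 4-7. The MiB/s column is just IOPS scaled by the 4096-byte I/O size; as a quick sanity check against, say, the flush row reported above:

    # 193654.35 IOPS * 4096 B per I/O, expressed in MiB/s
    echo 'scale=2; 193654.35 * 4096 / (1024 * 1024)' | bc
    # -> 756.46, matching the reported 756.46 MiB/s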
00:15:16.667 20:46:41 -- target/bdev_io_wait.sh@39 -- # wait 2738509 00:15:16.667 20:46:41 -- target/bdev_io_wait.sh@40 -- # wait 2738513 00:15:16.667 20:46:41 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:16.667 20:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:16.667 20:46:41 -- common/autotest_common.sh@10 -- # set +x 00:15:16.667 20:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:16.667 20:46:41 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:16.667 20:46:41 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:16.667 20:46:41 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:16.667 20:46:41 -- nvmf/common.sh@117 -- # sync 00:15:16.667 20:46:41 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:16.667 20:46:41 -- nvmf/common.sh@120 -- # set +e 00:15:16.667 20:46:41 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:16.667 20:46:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:16.667 rmmod nvme_tcp 00:15:16.667 rmmod nvme_fabrics 00:15:16.667 rmmod nvme_keyring 00:15:16.667 20:46:41 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:16.667 20:46:41 -- nvmf/common.sh@124 -- # set -e 00:15:16.667 20:46:41 -- nvmf/common.sh@125 -- # return 0 00:15:16.667 20:46:41 -- nvmf/common.sh@478 -- # '[' -n 2738412 ']' 00:15:16.667 20:46:41 -- nvmf/common.sh@479 -- # killprocess 2738412 00:15:16.667 20:46:41 -- common/autotest_common.sh@936 -- # '[' -z 2738412 ']' 00:15:16.667 20:46:41 -- common/autotest_common.sh@940 -- # kill -0 2738412 00:15:16.667 20:46:41 -- common/autotest_common.sh@941 -- # uname 00:15:16.667 20:46:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:16.667 20:46:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2738412 00:15:16.667 20:46:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:16.667 20:46:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:16.667 20:46:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2738412' 00:15:16.667 killing process with pid 2738412 00:15:16.667 20:46:41 -- common/autotest_common.sh@955 -- # kill 2738412 00:15:16.667 20:46:41 -- common/autotest_common.sh@960 -- # wait 2738412 00:15:16.929 20:46:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:16.929 20:46:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:16.929 20:46:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:16.929 20:46:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:16.929 20:46:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:16.929 20:46:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.929 20:46:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.929 20:46:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:18.846 20:46:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:18.846 00:15:18.846 real 0m12.225s 00:15:18.846 user 0m19.375s 00:15:18.846 sys 0m6.473s 00:15:18.846 20:46:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:18.846 20:46:43 -- common/autotest_common.sh@10 -- # set +x 00:15:18.846 ************************************ 00:15:18.846 END TEST nvmf_bdev_io_wait 00:15:18.846 ************************************ 00:15:18.846 20:46:43 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:18.846 20:46:43 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:18.846 20:46:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:18.846 20:46:43 -- common/autotest_common.sh@10 -- # set +x 00:15:19.109 ************************************ 00:15:19.109 START TEST nvmf_queue_depth 00:15:19.109 ************************************ 00:15:19.109 20:46:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:19.109 * Looking for test storage... 00:15:19.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.109 20:46:43 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.109 20:46:43 -- nvmf/common.sh@7 -- # uname -s 00:15:19.109 20:46:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.109 20:46:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.109 20:46:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.109 20:46:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.109 20:46:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.109 20:46:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.109 20:46:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.109 20:46:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.109 20:46:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.109 20:46:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.109 20:46:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:19.109 20:46:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:19.109 20:46:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.109 20:46:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.109 20:46:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:19.109 20:46:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.109 20:46:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.109 20:46:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.109 20:46:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.109 20:46:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.109 20:46:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.109 20:46:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.109 20:46:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.109 20:46:43 -- paths/export.sh@5 -- # export PATH 00:15:19.109 20:46:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.109 20:46:43 -- nvmf/common.sh@47 -- # : 0 00:15:19.109 20:46:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.109 20:46:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.109 20:46:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.109 20:46:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.109 20:46:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.109 20:46:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.109 20:46:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.109 20:46:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.371 20:46:43 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:19.371 20:46:43 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:19.371 20:46:43 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:19.371 20:46:43 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:19.371 20:46:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:19.371 20:46:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.371 20:46:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:19.371 20:46:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:19.371 20:46:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:19.371 20:46:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.371 20:46:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.371 20:46:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.371 20:46:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:19.371 20:46:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:19.371 20:46:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:19.371 20:46:43 -- 
common/autotest_common.sh@10 -- # set +x 00:15:25.968 20:46:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:25.968 20:46:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.968 20:46:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.968 20:46:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.968 20:46:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.968 20:46:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.968 20:46:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.968 20:46:50 -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.968 20:46:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.968 20:46:50 -- nvmf/common.sh@296 -- # e810=() 00:15:25.968 20:46:50 -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.968 20:46:50 -- nvmf/common.sh@297 -- # x722=() 00:15:25.968 20:46:50 -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.968 20:46:50 -- nvmf/common.sh@298 -- # mlx=() 00:15:25.968 20:46:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.968 20:46:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.968 20:46:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.968 20:46:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.968 20:46:50 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:25.968 20:46:50 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:25.968 20:46:50 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:25.968 20:46:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.968 20:46:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.968 20:46:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:25.968 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:25.968 20:46:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.968 20:46:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.968 20:46:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.969 20:46:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:25.969 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:25.969 20:46:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:15:25.969 20:46:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.969 20:46:50 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.969 20:46:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.969 20:46:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.969 20:46:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.969 20:46:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:25.969 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:25.969 20:46:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.969 20:46:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.969 20:46:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.969 20:46:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.969 20:46:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.969 20:46:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:25.969 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:25.969 20:46:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.969 20:46:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:25.969 20:46:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:25.969 20:46:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:25.969 20:46:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:25.969 20:46:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.969 20:46:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.969 20:46:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.969 20:46:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.969 20:46:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.969 20:46:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.969 20:46:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.969 20:46:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.969 20:46:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.969 20:46:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.969 20:46:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.969 20:46:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.969 20:46:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:26.230 20:46:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:26.230 20:46:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:26.230 20:46:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:26.230 20:46:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:26.230 20:46:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:26.230 20:46:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:26.230 20:46:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:26.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:26.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:15:26.490 00:15:26.490 --- 10.0.0.2 ping statistics --- 00:15:26.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.490 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:15:26.490 20:46:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:26.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:26.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:15:26.490 00:15:26.490 --- 10.0.0.1 ping statistics --- 00:15:26.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.490 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:15:26.490 20:46:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.490 20:46:50 -- nvmf/common.sh@411 -- # return 0 00:15:26.490 20:46:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:26.490 20:46:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.490 20:46:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:26.490 20:46:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:26.490 20:46:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.490 20:46:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:26.490 20:46:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:26.490 20:46:50 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:26.490 20:46:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:26.490 20:46:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:26.490 20:46:50 -- common/autotest_common.sh@10 -- # set +x 00:15:26.490 20:46:50 -- nvmf/common.sh@470 -- # nvmfpid=2743191 00:15:26.490 20:46:50 -- nvmf/common.sh@471 -- # waitforlisten 2743191 00:15:26.490 20:46:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:26.490 20:46:50 -- common/autotest_common.sh@817 -- # '[' -z 2743191 ']' 00:15:26.490 20:46:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.490 20:46:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:26.490 20:46:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:26.490 20:46:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:26.490 20:46:50 -- common/autotest_common.sh@10 -- # set +x 00:15:26.490 [2024-04-24 20:46:50.990838] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:26.490 [2024-04-24 20:46:50.990900] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.490 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.490 [2024-04-24 20:46:51.063532] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:26.751 [2024-04-24 20:46:51.134881] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.751 [2024-04-24 20:46:51.134922] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
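nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point NVMe/TCP test link: the target port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator port (cvl_0_1) stays in the default namespace as 10.0.0.1, and an iptables rule opens the NVMe/TCP port. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side, isolated namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The two ping checks above (both answered in well under a millisecond) confirm the link before nvmf_tgt is started inside the namespace.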
00:15:26.751 [2024-04-24 20:46:51.134930] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.751 [2024-04-24 20:46:51.134936] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.751 [2024-04-24 20:46:51.134943] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.751 [2024-04-24 20:46:51.134966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.751 20:46:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.751 20:46:51 -- common/autotest_common.sh@850 -- # return 0 00:15:26.751 20:46:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:26.751 20:46:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:26.751 20:46:51 -- common/autotest_common.sh@10 -- # set +x 00:15:26.751 20:46:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.751 20:46:51 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.751 20:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.751 20:46:51 -- common/autotest_common.sh@10 -- # set +x 00:15:26.751 [2024-04-24 20:46:51.255992] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.751 20:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.751 20:46:51 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:26.751 20:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.751 20:46:51 -- common/autotest_common.sh@10 -- # set +x 00:15:26.751 Malloc0 00:15:26.751 20:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.751 20:46:51 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:26.751 20:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.751 20:46:51 -- common/autotest_common.sh@10 -- # set +x 00:15:26.751 20:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.751 20:46:51 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:26.751 20:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.751 20:46:51 -- common/autotest_common.sh@10 -- # set +x 00:15:26.751 20:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.751 20:46:51 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.751 20:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:26.751 20:46:51 -- common/autotest_common.sh@10 -- # set +x 00:15:26.751 [2024-04-24 20:46:51.323995] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.751 20:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:26.751 20:46:51 -- target/queue_depth.sh@30 -- # bdevperf_pid=2743211 00:15:26.751 20:46:51 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:26.751 20:46:51 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:26.751 20:46:51 -- target/queue_depth.sh@33 -- # waitforlisten 2743211 /var/tmp/bdevperf.sock 00:15:26.751 20:46:51 -- common/autotest_common.sh@817 -- # '[' -z 2743211 ']' 
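With nvmf_tgt running inside the namespace, queue_depth.sh provisions the target entirely over JSON-RPC; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock (a Unix socket, so it stays reachable from the default namespace even though the target's network stack is isolated). The equivalent direct calls, as a sketch of the sequence traced above:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB I/O unit size
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420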
00:15:26.751 20:46:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:26.751 20:46:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:26.751 20:46:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:26.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:26.752 20:46:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:26.752 20:46:51 -- common/autotest_common.sh@10 -- # set +x 00:15:26.752 [2024-04-24 20:46:51.383721] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:26.752 [2024-04-24 20:46:51.383779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2743211 ] 00:15:27.012 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.012 [2024-04-24 20:46:51.457524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.012 [2024-04-24 20:46:51.520289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.968 20:46:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:27.968 20:46:52 -- common/autotest_common.sh@850 -- # return 0 00:15:27.968 20:46:52 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:27.968 20:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:27.968 20:46:52 -- common/autotest_common.sh@10 -- # set +x 00:15:27.968 NVMe0n1 00:15:27.968 20:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:27.968 20:46:52 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:27.968 Running I/O for 10 seconds... 
00:15:38.008 00:15:38.008 Latency(us) 00:15:38.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.008 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:38.008 Verification LBA range: start 0x0 length 0x4000 00:15:38.008 NVMe0n1 : 10.09 9401.79 36.73 0.00 0.00 108416.82 24794.45 73837.23 00:15:38.008 =================================================================================================================== 00:15:38.008 Total : 9401.79 36.73 0.00 0.00 108416.82 24794.45 73837.23 00:15:38.008 0 00:15:38.008 20:47:02 -- target/queue_depth.sh@39 -- # killprocess 2743211 00:15:38.008 20:47:02 -- common/autotest_common.sh@936 -- # '[' -z 2743211 ']' 00:15:38.008 20:47:02 -- common/autotest_common.sh@940 -- # kill -0 2743211 00:15:38.008 20:47:02 -- common/autotest_common.sh@941 -- # uname 00:15:38.008 20:47:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.008 20:47:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2743211 00:15:38.008 20:47:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:38.008 20:47:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:38.008 20:47:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2743211' 00:15:38.008 killing process with pid 2743211 00:15:38.008 20:47:02 -- common/autotest_common.sh@955 -- # kill 2743211 00:15:38.008 Received shutdown signal, test time was about 10.000000 seconds 00:15:38.008 00:15:38.008 Latency(us) 00:15:38.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:38.008 =================================================================================================================== 00:15:38.008 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:38.008 20:47:02 -- common/autotest_common.sh@960 -- # wait 2743211 00:15:38.269 20:47:02 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:38.269 20:47:02 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:38.269 20:47:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:38.269 20:47:02 -- nvmf/common.sh@117 -- # sync 00:15:38.269 20:47:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.269 20:47:02 -- nvmf/common.sh@120 -- # set +e 00:15:38.269 20:47:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.269 20:47:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.269 rmmod nvme_tcp 00:15:38.269 rmmod nvme_fabrics 00:15:38.269 rmmod nvme_keyring 00:15:38.269 20:47:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.269 20:47:02 -- nvmf/common.sh@124 -- # set -e 00:15:38.269 20:47:02 -- nvmf/common.sh@125 -- # return 0 00:15:38.269 20:47:02 -- nvmf/common.sh@478 -- # '[' -n 2743191 ']' 00:15:38.269 20:47:02 -- nvmf/common.sh@479 -- # killprocess 2743191 00:15:38.269 20:47:02 -- common/autotest_common.sh@936 -- # '[' -z 2743191 ']' 00:15:38.269 20:47:02 -- common/autotest_common.sh@940 -- # kill -0 2743191 00:15:38.269 20:47:02 -- common/autotest_common.sh@941 -- # uname 00:15:38.269 20:47:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.269 20:47:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2743191 00:15:38.269 20:47:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:38.269 20:47:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:38.269 20:47:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2743191' 00:15:38.269 killing process with pid 2743191 00:15:38.269 
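The initiator half of the run is also visible in the trace: bdevperf is started idle (-z) in the default namespace with a queue depth of 1024 and 4 KiB I/O, an NVMe-oF controller is attached to it over its private RPC socket, and bdevperf.py triggers the timed run. A condensed sketch, paths relative to the SPDK repo root:

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The result line is self-consistent: 9401.79 IOPS x 4096 B is roughly 38.5 MB/s, i.e. about 36.7 MiB/s, which matches the reported 36.73 MiB/s for the ~10 s verify workload against the Malloc0-backed namespace.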
20:47:02 -- common/autotest_common.sh@955 -- # kill 2743191 00:15:38.269 20:47:02 -- common/autotest_common.sh@960 -- # wait 2743191 00:15:38.530 20:47:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:38.531 20:47:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:38.531 20:47:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:38.531 20:47:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.531 20:47:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.531 20:47:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.531 20:47:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.531 20:47:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.441 20:47:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.442 00:15:40.442 real 0m21.467s 00:15:40.442 user 0m25.245s 00:15:40.442 sys 0m6.388s 00:15:40.442 20:47:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:40.442 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:15:40.703 ************************************ 00:15:40.703 END TEST nvmf_queue_depth 00:15:40.703 ************************************ 00:15:40.703 20:47:05 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:40.703 20:47:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:40.703 20:47:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:40.703 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:15:40.703 ************************************ 00:15:40.703 START TEST nvmf_multipath 00:15:40.703 ************************************ 00:15:40.703 20:47:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:40.966 * Looking for test storage... 
00:15:40.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.966 20:47:05 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.966 20:47:05 -- nvmf/common.sh@7 -- # uname -s 00:15:40.966 20:47:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.966 20:47:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.966 20:47:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.966 20:47:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.966 20:47:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.966 20:47:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.966 20:47:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.966 20:47:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.966 20:47:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.966 20:47:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.966 20:47:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:40.966 20:47:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:40.966 20:47:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.966 20:47:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.966 20:47:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.966 20:47:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.966 20:47:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.966 20:47:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.966 20:47:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.966 20:47:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.966 20:47:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.966 20:47:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.966 20:47:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.966 20:47:05 -- paths/export.sh@5 -- # export PATH 00:15:40.966 20:47:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.966 20:47:05 -- nvmf/common.sh@47 -- # : 0 00:15:40.966 20:47:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.966 20:47:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.966 20:47:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.966 20:47:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.966 20:47:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.966 20:47:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.966 20:47:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.966 20:47:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.966 20:47:05 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.966 20:47:05 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.966 20:47:05 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:40.966 20:47:05 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.966 20:47:05 -- target/multipath.sh@43 -- # nvmftestinit 00:15:40.966 20:47:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:40.966 20:47:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.966 20:47:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:40.966 20:47:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:40.966 20:47:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:40.966 20:47:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.966 20:47:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.966 20:47:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.966 20:47:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:40.966 20:47:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:40.966 20:47:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.966 20:47:05 -- common/autotest_common.sh@10 -- # set +x 00:15:49.118 20:47:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:49.118 20:47:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:49.118 20:47:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:49.118 20:47:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:49.118 20:47:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:49.118 20:47:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:49.118 20:47:12 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:15:49.118 20:47:12 -- nvmf/common.sh@295 -- # net_devs=() 00:15:49.118 20:47:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:49.118 20:47:12 -- nvmf/common.sh@296 -- # e810=() 00:15:49.118 20:47:12 -- nvmf/common.sh@296 -- # local -ga e810 00:15:49.118 20:47:12 -- nvmf/common.sh@297 -- # x722=() 00:15:49.118 20:47:12 -- nvmf/common.sh@297 -- # local -ga x722 00:15:49.119 20:47:12 -- nvmf/common.sh@298 -- # mlx=() 00:15:49.119 20:47:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:49.119 20:47:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:49.119 20:47:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:49.119 20:47:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:49.119 20:47:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:49.119 20:47:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.119 20:47:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:49.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:49.119 20:47:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:49.119 20:47:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:49.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:49.119 20:47:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:49.119 20:47:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.119 20:47:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.119 20:47:12 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:15:49.119 20:47:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.119 20:47:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:49.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:49.119 20:47:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.119 20:47:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:49.119 20:47:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:49.119 20:47:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:49.119 20:47:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:49.119 20:47:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:49.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:49.119 20:47:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:49.119 20:47:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:49.119 20:47:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:49.119 20:47:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:49.119 20:47:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.119 20:47:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:49.119 20:47:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:49.119 20:47:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:49.119 20:47:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:49.119 20:47:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:49.119 20:47:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:49.119 20:47:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:49.119 20:47:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.119 20:47:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:49.119 20:47:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:49.119 20:47:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:49.119 20:47:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:49.119 20:47:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:49.119 20:47:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:49.119 20:47:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:49.119 20:47:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:49.119 20:47:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:49.119 20:47:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:49.119 20:47:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:49.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:15:49.119 00:15:49.119 --- 10.0.0.2 ping statistics --- 00:15:49.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.119 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:15:49.119 20:47:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:49.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:15:49.119 00:15:49.119 --- 10.0.0.1 ping statistics --- 00:15:49.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.119 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:15:49.119 20:47:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.119 20:47:12 -- nvmf/common.sh@411 -- # return 0 00:15:49.119 20:47:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:49.119 20:47:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.119 20:47:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.119 20:47:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:49.119 20:47:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:49.119 20:47:12 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:49.119 20:47:12 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:49.119 only one NIC for nvmf test 00:15:49.119 20:47:12 -- target/multipath.sh@47 -- # nvmftestfini 00:15:49.119 20:47:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:49.119 20:47:12 -- nvmf/common.sh@117 -- # sync 00:15:49.119 20:47:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:49.119 20:47:12 -- nvmf/common.sh@120 -- # set +e 00:15:49.119 20:47:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:49.119 20:47:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:49.119 rmmod nvme_tcp 00:15:49.119 rmmod nvme_fabrics 00:15:49.119 rmmod nvme_keyring 00:15:49.119 20:47:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:49.119 20:47:12 -- nvmf/common.sh@124 -- # set -e 00:15:49.119 20:47:12 -- nvmf/common.sh@125 -- # return 0 00:15:49.119 20:47:12 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:49.119 20:47:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:49.119 20:47:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:49.119 20:47:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:49.119 20:47:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:49.119 20:47:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.119 20:47:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:49.119 20:47:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.510 20:47:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.510 20:47:14 -- target/multipath.sh@48 -- # exit 0 00:15:50.510 20:47:14 -- target/multipath.sh@1 -- # nvmftestfini 00:15:50.510 20:47:14 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:50.510 20:47:14 -- nvmf/common.sh@117 -- # sync 00:15:50.510 20:47:14 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:50.510 20:47:14 -- nvmf/common.sh@120 -- # set +e 00:15:50.510 20:47:14 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:50.510 20:47:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:50.510 20:47:14 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:50.510 20:47:14 -- nvmf/common.sh@124 -- # set -e 00:15:50.510 20:47:14 -- nvmf/common.sh@125 -- # return 0 00:15:50.510 20:47:14 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:15:50.510 20:47:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:50.510 20:47:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:50.510 20:47:14 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:15:50.510 20:47:14 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:50.510 20:47:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:50.511 20:47:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.511 20:47:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:50.511 20:47:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.511 20:47:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:50.511 00:15:50.511 real 0m9.625s 00:15:50.511 user 0m2.102s 00:15:50.511 sys 0m5.432s 00:15:50.511 20:47:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:50.511 20:47:14 -- common/autotest_common.sh@10 -- # set +x 00:15:50.511 ************************************ 00:15:50.511 END TEST nvmf_multipath 00:15:50.511 ************************************ 00:15:50.511 20:47:14 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:50.511 20:47:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:50.511 20:47:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:50.511 20:47:14 -- common/autotest_common.sh@10 -- # set +x 00:15:50.511 ************************************ 00:15:50.511 START TEST nvmf_zcopy 00:15:50.511 ************************************ 00:15:50.511 20:47:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:50.772 * Looking for test storage... 00:15:50.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.773 20:47:15 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.773 20:47:15 -- nvmf/common.sh@7 -- # uname -s 00:15:50.773 20:47:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.773 20:47:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.773 20:47:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.773 20:47:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.773 20:47:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:50.773 20:47:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:50.773 20:47:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.773 20:47:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:50.773 20:47:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.773 20:47:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:50.773 20:47:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:50.773 20:47:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:15:50.773 20:47:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.773 20:47:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:50.773 20:47:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:50.773 20:47:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.773 20:47:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.773 20:47:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.773 20:47:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.773 20:47:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.773 
20:47:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.773 20:47:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.773 20:47:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.773 20:47:15 -- paths/export.sh@5 -- # export PATH 00:15:50.773 20:47:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.773 20:47:15 -- nvmf/common.sh@47 -- # : 0 00:15:50.773 20:47:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:50.773 20:47:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:50.773 20:47:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:50.773 20:47:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.773 20:47:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.773 20:47:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:50.773 20:47:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:50.773 20:47:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:50.773 20:47:15 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:50.773 20:47:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:50.773 20:47:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.773 20:47:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:50.773 20:47:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:50.773 20:47:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:50.773 20:47:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:50.773 20:47:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:15:50.773 20:47:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:50.773 20:47:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:15:50.773 20:47:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:50.773 20:47:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:50.773 20:47:15 -- common/autotest_common.sh@10 -- # set +x 00:15:58.921 20:47:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:58.921 20:47:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:58.921 20:47:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:58.921 20:47:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:58.921 20:47:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:58.921 20:47:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:58.921 20:47:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:58.921 20:47:22 -- nvmf/common.sh@295 -- # net_devs=() 00:15:58.921 20:47:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:58.921 20:47:22 -- nvmf/common.sh@296 -- # e810=() 00:15:58.921 20:47:22 -- nvmf/common.sh@296 -- # local -ga e810 00:15:58.921 20:47:22 -- nvmf/common.sh@297 -- # x722=() 00:15:58.921 20:47:22 -- nvmf/common.sh@297 -- # local -ga x722 00:15:58.921 20:47:22 -- nvmf/common.sh@298 -- # mlx=() 00:15:58.921 20:47:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:58.921 20:47:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.921 20:47:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:58.921 20:47:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:58.921 20:47:22 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:58.921 20:47:22 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:58.921 20:47:22 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:58.921 20:47:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:58.921 20:47:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.921 20:47:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:58.921 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:58.921 20:47:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.921 20:47:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.921 20:47:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.921 20:47:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.921 20:47:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:58.922 20:47:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:58.922 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:15:58.922 20:47:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:58.922 20:47:22 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.922 20:47:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.922 20:47:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:58.922 20:47:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.922 20:47:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:58.922 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:58.922 20:47:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.922 20:47:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:58.922 20:47:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.922 20:47:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:58.922 20:47:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.922 20:47:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:58.922 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:58.922 20:47:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.922 20:47:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:58.922 20:47:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:58.922 20:47:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:58.922 20:47:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:58.922 20:47:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:58.922 20:47:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:58.922 20:47:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:58.922 20:47:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:58.922 20:47:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:58.922 20:47:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:58.922 20:47:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:58.922 20:47:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:58.922 20:47:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:58.922 20:47:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:58.922 20:47:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:58.922 20:47:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:58.922 20:47:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:58.922 20:47:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:58.922 20:47:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:58.922 20:47:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:58.922 20:47:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:58.922 
20:47:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:58.922 20:47:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:58.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:15:58.922 00:15:58.922 --- 10.0.0.2 ping statistics --- 00:15:58.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.922 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:15:58.922 20:47:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:58.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:58.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:15:58.922 00:15:58.922 --- 10.0.0.1 ping statistics --- 00:15:58.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.922 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:15:58.922 20:47:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.922 20:47:22 -- nvmf/common.sh@411 -- # return 0 00:15:58.922 20:47:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:58.922 20:47:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.922 20:47:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:58.922 20:47:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.922 20:47:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:58.922 20:47:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:58.922 20:47:22 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:58.922 20:47:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:58.922 20:47:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 20:47:22 -- nvmf/common.sh@470 -- # nvmfpid=2753883 00:15:58.922 20:47:22 -- nvmf/common.sh@471 -- # waitforlisten 2753883 00:15:58.922 20:47:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:58.922 20:47:22 -- common/autotest_common.sh@817 -- # '[' -z 2753883 ']' 00:15:58.922 20:47:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.922 20:47:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:58.922 20:47:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.922 20:47:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 [2024-04-24 20:47:22.572933] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:58.922 [2024-04-24 20:47:22.572986] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.922 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.922 [2024-04-24 20:47:22.638646] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.922 [2024-04-24 20:47:22.703923] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:58.922 [2024-04-24 20:47:22.703960] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.922 [2024-04-24 20:47:22.703968] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.922 [2024-04-24 20:47:22.703977] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.922 [2024-04-24 20:47:22.703983] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.922 [2024-04-24 20:47:22.704006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.922 20:47:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:58.922 20:47:22 -- common/autotest_common.sh@850 -- # return 0 00:15:58.922 20:47:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:58.922 20:47:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 20:47:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.922 20:47:22 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:58.922 20:47:22 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:58.922 20:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 [2024-04-24 20:47:22.836651] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.922 20:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.922 20:47:22 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:58.922 20:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 20:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.922 20:47:22 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.922 20:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 [2024-04-24 20:47:22.860858] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.922 20:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.922 20:47:22 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:58.922 20:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 20:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.922 20:47:22 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:58.922 20:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 malloc0 00:15:58.922 20:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.922 20:47:22 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:58.922 20:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:58.922 20:47:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.922 20:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:58.922 20:47:22 -- target/zcopy.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:58.922 20:47:22 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:58.922 20:47:22 -- nvmf/common.sh@521 -- # config=() 00:15:58.922 20:47:22 -- nvmf/common.sh@521 -- # local subsystem config 00:15:58.922 20:47:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:58.922 20:47:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:58.922 { 00:15:58.922 "params": { 00:15:58.922 "name": "Nvme$subsystem", 00:15:58.922 "trtype": "$TEST_TRANSPORT", 00:15:58.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:58.922 "adrfam": "ipv4", 00:15:58.922 "trsvcid": "$NVMF_PORT", 00:15:58.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:58.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:58.923 "hdgst": ${hdgst:-false}, 00:15:58.923 "ddgst": ${ddgst:-false} 00:15:58.923 }, 00:15:58.923 "method": "bdev_nvme_attach_controller" 00:15:58.923 } 00:15:58.923 EOF 00:15:58.923 )") 00:15:58.923 20:47:22 -- nvmf/common.sh@543 -- # cat 00:15:58.923 20:47:22 -- nvmf/common.sh@545 -- # jq . 00:15:58.923 20:47:22 -- nvmf/common.sh@546 -- # IFS=, 00:15:58.923 20:47:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:58.923 "params": { 00:15:58.923 "name": "Nvme1", 00:15:58.923 "trtype": "tcp", 00:15:58.923 "traddr": "10.0.0.2", 00:15:58.923 "adrfam": "ipv4", 00:15:58.923 "trsvcid": "4420", 00:15:58.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:58.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:58.923 "hdgst": false, 00:15:58.923 "ddgst": false 00:15:58.923 }, 00:15:58.923 "method": "bdev_nvme_attach_controller" 00:15:58.923 }' 00:15:58.923 [2024-04-24 20:47:22.951430] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:15:58.923 [2024-04-24 20:47:22.951477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753911 ] 00:15:58.923 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.923 [2024-04-24 20:47:23.026046] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.923 [2024-04-24 20:47:23.088832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.923 Running I/O for 10 seconds... 
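Unlike the queue-depth run, this zcopy bdevperf job gets its target from a generated JSON config rather than a post-start RPC: gen_nvmf_target_json, traced above, emits a config whose bdev_nvme_attach_controller entry points at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and the shell hands it to bdevperf as /dev/fd/62, presumably via process substitution. A hedged sketch of the invocation:

    # Sketch only; the exact fd number and the json generator come from zcopy.sh itself.
    build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192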
00:16:08.952
00:16:08.952 Latency(us)
00:16:08.952 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:08.952 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:08.952 Verification LBA range: start 0x0 length 0x1000
00:16:08.952 Nvme1n1 : 10.02 6618.48 51.71 0.00 0.00 19283.80 3276.80 27197.44
00:16:08.952 ===================================================================================================================
00:16:08.952 Total : 6618.48 51.71 0.00 0.00 19283.80 3276.80 27197.44
00:16:08.952 20:47:33 -- target/zcopy.sh@39 -- # perfpid=2755974
00:16:08.952 20:47:33 -- target/zcopy.sh@41 -- # xtrace_disable
00:16:08.952 20:47:33 -- common/autotest_common.sh@10 -- # set +x
00:16:08.952 20:47:33 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:08.952 20:47:33 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:08.952 20:47:33 -- nvmf/common.sh@521 -- # config=()
00:16:08.952 20:47:33 -- nvmf/common.sh@521 -- # local subsystem config
00:16:08.952 20:47:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:16:08.952 20:47:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:16:08.952 {
00:16:08.952 "params": {
00:16:08.952 "name": "Nvme$subsystem",
00:16:08.952 "trtype": "$TEST_TRANSPORT",
00:16:08.952 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:08.952 "adrfam": "ipv4",
00:16:08.952 "trsvcid": "$NVMF_PORT",
00:16:08.952 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:08.952 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:08.952 "hdgst": ${hdgst:-false},
00:16:08.952 "ddgst": ${ddgst:-false}
00:16:08.952 },
00:16:08.952 "method": "bdev_nvme_attach_controller"
00:16:08.952 }
00:16:08.952 EOF
00:16:08.952 )")
00:16:08.952 20:47:33 -- nvmf/common.sh@543 -- # cat
00:16:08.952 [2024-04-24 20:47:33.537570] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:08.952 [2024-04-24 20:47:33.537601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:08.952 20:47:33 -- nvmf/common.sh@545 -- # jq .
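Reading the bdevperf table above: the columns after the job name are runtime (s), IOPS, MiB/s, Fail/s, TO/s, then average/min/max latency in microseconds. The MiB/s figure is just IOPS times the 8192-byte I/O size, and with 128 requests in flight the average latency is consistent with Little's law; a quick check:

# sanity-check the verify-run numbers reported above
awk 'BEGIN { iops = 6618.48; q = 128; printf "%.2f MiB/s, ~%.0f us avg latency\n", iops * 8192 / 1048576, q / iops * 1e6 }'
# -> 51.71 MiB/s, ~19340 us (the table reports 51.71 MiB/s and 19283.80 us average)

The trace then launches a second bdevperf run in the background (--json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192, i.e. 5 seconds of a 50/50 random read/write mix) and records its PID as perfpid=2755974.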
00:16:08.952 20:47:33 -- nvmf/common.sh@546 -- # IFS=, 00:16:08.952 20:47:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:08.952 "params": { 00:16:08.952 "name": "Nvme1", 00:16:08.952 "trtype": "tcp", 00:16:08.952 "traddr": "10.0.0.2", 00:16:08.952 "adrfam": "ipv4", 00:16:08.952 "trsvcid": "4420", 00:16:08.952 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:08.952 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:08.952 "hdgst": false, 00:16:08.952 "ddgst": false 00:16:08.952 }, 00:16:08.952 "method": "bdev_nvme_attach_controller" 00:16:08.952 }' 00:16:08.952 [2024-04-24 20:47:33.549572] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.952 [2024-04-24 20:47:33.549583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.952 [2024-04-24 20:47:33.561601] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.952 [2024-04-24 20:47:33.561612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.952 [2024-04-24 20:47:33.573635] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.952 [2024-04-24 20:47:33.573646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:08.952 [2024-04-24 20:47:33.575995] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:16:08.952 [2024-04-24 20:47:33.576044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755974 ] 00:16:08.953 [2024-04-24 20:47:33.585669] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:08.953 [2024-04-24 20:47:33.585680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.597702] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.597714] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.213 [2024-04-24 20:47:33.609739] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.609749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.621771] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.621781] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.633798] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.633808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.645829] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.645839] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.650389] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.213 [2024-04-24 20:47:33.657861] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.657873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:16:09.213 [2024-04-24 20:47:33.669892] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.669904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.681926] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.681938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.693959] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.693972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.705992] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.706003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.713085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.213 [2024-04-24 20:47:33.718022] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.718032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.730062] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.730078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.742092] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.742104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.754121] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.754132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.766154] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.766166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.778183] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.778194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.790227] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.790249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.802253] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.802266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.814286] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.814299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.826315] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.826327] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.838346] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.838356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.213 [2024-04-24 20:47:33.850377] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.213 [2024-04-24 20:47:33.850387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.862412] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.862423] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.874446] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.874459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.886478] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.886489] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.898512] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.898522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.910546] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.910557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.922579] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.922591] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.934618] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.934635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 Running I/O for 5 seconds... 
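From this point the run interleaves two things: the background bdevperf job (PID 2755974, the randrw run launched above) and repeated nvmf_subsystem_add_ns RPCs for NSID 1 issued while that I/O is in flight. Each attempt pauses the subsystem (hence the nvmf_rpc_ns_paused callback in the messages), finds NSID 1 already occupied by malloc0, and fails, so the same pair of *ERROR* lines from subsystem.c and nvmf_rpc.c repeats every few milliseconds above and below; these errors are the expected output of this phase rather than a test failure. An illustrative shape of such a loop, as a sketch only and not the test's literal body:

# keep re-adding the already-present namespace while the background bdevperf (perfpid) is alive;
# every attempt fails with "Requested NSID 1 already in use", as logged below
while kill -0 "$perfpid" 2> /dev/null; do
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done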
00:16:09.474 [2024-04-24 20:47:33.946648] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.946659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.963272] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.963292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.980373] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.980393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:33.995264] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:33.995283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:34.012251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:34.012270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:34.028472] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:34.028491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:34.045914] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:34.045937] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:34.062301] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:34.062320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:34.079641] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:34.079661] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:34.095358] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:34.095378] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.474 [2024-04-24 20:47:34.106737] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.474 [2024-04-24 20:47:34.106755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.122897] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.122915] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.140756] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.140775] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.156884] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.156902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.175003] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 
[2024-04-24 20:47:34.175022] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.189588] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.189606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.206487] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.206505] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.223230] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.223248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.241328] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.241347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.257855] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.257873] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.275965] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.275984] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.292644] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.292662] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.310018] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.310036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.326497] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.326514] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.344681] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.344699] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.734 [2024-04-24 20:47:34.361673] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.734 [2024-04-24 20:47:34.361695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.993 [2024-04-24 20:47:34.378580] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.378598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.395348] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.395366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.412492] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.412510] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.428521] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.428539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.445968] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.445986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.462736] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.462754] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.479318] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.479336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.496540] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.496559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.513324] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.513342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.531211] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.531237] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.547876] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.547896] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.564395] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.564413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.581847] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.581865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.598591] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.598609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.615271] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.615289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:09.994 [2024-04-24 20:47:34.633156] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:09.994 [2024-04-24 20:47:34.633174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.647914] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.647932] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.664522] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.664540] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.681304] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.681322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.696628] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.696646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.712604] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.712621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.730976] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.730994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.746662] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.746679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.764126] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.764145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.780827] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.780845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.799087] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.799105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.814889] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.814907] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.832246] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.832265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.847935] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.847953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.859426] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.859444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.253 [2024-04-24 20:47:34.876069] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.253 [2024-04-24 20:47:34.876087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.893515] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.893533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.910351] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.910369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.928414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.928432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.943580] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.943598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.954709] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.954732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.971235] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.971254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.987039] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.987058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:34.998298] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:34.998317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.014645] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.014664] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.032048] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.032066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.048822] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.048840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.065618] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.065636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.082490] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.082509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.098905] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.098923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.115141] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.115160] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.132766] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.132785] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.514 [2024-04-24 20:47:35.148422] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.514 [2024-04-24 20:47:35.148441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.159947] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.159965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.176861] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.176880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.193654] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.193673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.211114] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.211132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.227693] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.227711] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.245102] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.245120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.260926] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.260944] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.278003] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.278021] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.293711] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.293734] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.305226] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.305244] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.321904] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.321923] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.337715] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.337739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.349156] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.349174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.365827] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.365845] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.381700] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.381718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:10.774 [2024-04-24 20:47:35.399110] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:10.774 [2024-04-24 20:47:35.399128] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.416595] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.416614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.431569] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.431588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.448653] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.448672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.464985] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.465003] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.482182] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.482200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.498526] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.498545] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.514908] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.514927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.533050] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.034 [2024-04-24 20:47:35.533069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.034 [2024-04-24 20:47:35.549874] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.549892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.035 [2024-04-24 20:47:35.567237] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.567256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.035 [2024-04-24 20:47:35.583230] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.583249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.035 [2024-04-24 20:47:35.601000] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.601020] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.035 [2024-04-24 20:47:35.617472] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.617490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.035 [2024-04-24 20:47:35.634952] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.634970] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.035 [2024-04-24 20:47:35.650458] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.650476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.035 [2024-04-24 20:47:35.668054] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.035 [2024-04-24 20:47:35.668072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.294 [2024-04-24 20:47:35.683737] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.294 [2024-04-24 20:47:35.683755] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.294 [2024-04-24 20:47:35.695211] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.294 [2024-04-24 20:47:35.695230] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.294 [2024-04-24 20:47:35.712339] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.294 [2024-04-24 20:47:35.712357] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.294 [2024-04-24 20:47:35.728902] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.294 [2024-04-24 20:47:35.728920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.294 [2024-04-24 20:47:35.746832] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.746850] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.762863] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.762880] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.779782] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.779800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.796058] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.796080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.813946] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.813965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.829523] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.829542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.840614] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.840633] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.857372] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.857390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.873579] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.873597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.889081] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.889104] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.905918] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.905936] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.295 [2024-04-24 20:47:35.923160] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.295 [2024-04-24 20:47:35.923177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:35.940060] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:35.940078] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:35.957752] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:35.957770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:35.973792] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:35.973810] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:35.991735] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:35.991753] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.008349] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.008367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.020612] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.020630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.037079] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.037097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.054030] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.054048] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.070742] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.070761] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.087555] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.087573] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.104795] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.104813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.120616] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.120634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.137240] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.137259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.154057] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.154075] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.170806] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.170824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.556 [2024-04-24 20:47:36.188359] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.556 [2024-04-24 20:47:36.188377] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.203914] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.203938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.215904] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.215922] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.231773] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.231791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.249348] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.249366] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.266259] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.266277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.282391] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.282409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.294132] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.294150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.310493] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.310511] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.327514] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.327532] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.344615] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.344633] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.361099] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.361116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.379294] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.379312] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.394941] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.394958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.411922] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.411940] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.428233] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.428251] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.816 [2024-04-24 20:47:36.445381] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:11.816 [2024-04-24 20:47:36.445399] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.462041] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.462059] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.479009] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.479027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.495628] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.495646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.512173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.512195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.528367] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.528386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.539846] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.539864] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.556617] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.556635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.572350] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.572367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.590357] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.590375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.605638] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.605658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.617164] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.617183] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.633871] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.633890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.650838] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.650856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.667293] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.667311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.684090] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.684107] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.076 [2024-04-24 20:47:36.700255] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.076 [2024-04-24 20:47:36.700273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.717403] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.717421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.734343] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.734361] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.751683] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.751702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.767520] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.767539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.778832] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.778851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.794957] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.794975] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.812508] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.812530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.828808] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.828826] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.846829] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.846848] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.863050] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.863069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.881339] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.881358] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.896317] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.896336] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.912859] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.912878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.930173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.930191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.946779] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.946797] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.337 [2024-04-24 20:47:36.963411] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.337 [2024-04-24 20:47:36.963429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:36.980073] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:36.980092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:36.996577] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:36.996596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.014488] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.014506] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.030468] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.030486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.047576] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.047599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.064549] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.064569] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.081601] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.081621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.098387] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.098406] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.115005] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.115023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.132167] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.132186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.147837] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.147856] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.158981] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.159000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.175931] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.175950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.192587] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.192605] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.210504] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.210523] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.226144] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.226163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.598 [2024-04-24 20:47:37.237871] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.598 [2024-04-24 20:47:37.237888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.254985] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.255004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.269602] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.269621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.286047] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.286065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.303760] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.303779] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.319510] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.319529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.330742] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.330760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.347248] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.347266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.364750] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.364768] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.381179] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.381197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.398369] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.398387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.414858] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.414876] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.430512] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.430531] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.441924] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.441942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.459802] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.459820] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.473984] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.474002] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:12.860 [2024-04-24 20:47:37.490494] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:12.860 [2024-04-24 20:47:37.490512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.508107] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.508125] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.524650] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.524668] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.542614] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.542633] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.558635] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.558653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.575770] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.575788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.592992] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.593010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.610760] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.610778] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.628396] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.628415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.644215] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.644233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.661994] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.662011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.679702] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.679720] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.694764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.694783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.710665] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.710683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.728155] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.728174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.744118] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.744136] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.122 [2024-04-24 20:47:37.761772] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.122 [2024-04-24 20:47:37.761790] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.383 [2024-04-24 20:47:37.777782] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.383 [2024-04-24 20:47:37.777801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.383 [2024-04-24 20:47:37.789371] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.383 [2024-04-24 20:47:37.789390] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.806175] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.806193] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.821855] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.821874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.833414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.833432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.850522] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.850540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.865289] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.865307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.882352] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.882370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.898147] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.898165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.914930] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.914948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.931626] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.931644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.948435] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.948453] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.965635] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.965654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.982395] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:37.982414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:37.999998] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:38.000016] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.384 [2024-04-24 20:47:38.015186] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.384 [2024-04-24 20:47:38.015205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.031224] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.031242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.048432] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.048450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.065305] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.065323] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.082303] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.082322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.098218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.098236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.116385] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.116404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.131310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.131329] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.147626] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.147644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.164407] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.164425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.645 [2024-04-24 20:47:38.181102] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.645 [2024-04-24 20:47:38.181120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.646 [2024-04-24 20:47:38.197674] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.646 [2024-04-24 20:47:38.197692] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.646 [2024-04-24 20:47:38.215380] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.646 [2024-04-24 20:47:38.215398] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.646 [2024-04-24 20:47:38.232195] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.646 [2024-04-24 20:47:38.232213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.646 [2024-04-24 20:47:38.250198] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.646 [2024-04-24 20:47:38.250216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.646 [2024-04-24 20:47:38.264216] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.646 [2024-04-24 20:47:38.264233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.646 [2024-04-24 20:47:38.280954] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.646 [2024-04-24 20:47:38.280976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.297468] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.297488] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.314159] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.314177] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.330763] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.330780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.348597] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.348620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.365488] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.365506] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.382397] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.382415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.400519] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.400537] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.415684] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.415703] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.431331] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.431349] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.448599] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.448617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.464592] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.464610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.482864] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.482883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.498098] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.498116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.513746] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.513764] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.531332] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.531351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:13.907 [2024-04-24 20:47:38.546707] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:13.907 [2024-04-24 20:47:38.546731] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.168 [2024-04-24 20:47:38.558146] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.168 [2024-04-24 20:47:38.558165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.168 [2024-04-24 20:47:38.574789] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.168 [2024-04-24 20:47:38.574807] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.168 [2024-04-24 20:47:38.589847] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.589866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.607072] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.607090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.622198] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.622217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.639034] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.639053] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.654206] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.654228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.670450] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.670468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.687662] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.687680] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.703706] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.703729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.714491] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.714509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.731178] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.731197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.747984] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.748002] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.765165] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.765184] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.781476] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.781494] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.169 [2024-04-24 20:47:38.797625] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.169 [2024-04-24 20:47:38.797644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.809105] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.809124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.825299] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.825317] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.842396] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.842414] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.859596] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.859615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.876352] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.876371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.893373] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.893392] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.909435] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.909453] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.926697] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.926715] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.943612] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.943630] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.960114] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.960137] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 00:16:14.430 Latency(us) 00:16:14.430 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.430 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:14.430 Nvme1n1 : 5.01 13159.90 102.81 0.00 0.00 9715.06 4450.99 20206.93 00:16:14.430 =================================================================================================================== 00:16:14.430 Total : 13159.90 102.81 0.00 0.00 9715.06 4450.99 20206.93 00:16:14.430 [2024-04-24 20:47:38.971903] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.971920] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.983935] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.983950] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:38.995963] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:38.995978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:39.007993] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:39.008009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:39.020023] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:39.020035] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:39.032054] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:39.032066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:39.044085] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:39.044095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:39.056118] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:39.056132] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.430 [2024-04-24 20:47:39.068147] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.430 [2024-04-24 20:47:39.068157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.691 [2024-04-24 20:47:39.080184] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.691 [2024-04-24 20:47:39.080197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.691 [2024-04-24 20:47:39.092213] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:14.691 [2024-04-24 20:47:39.092223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:14.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2755974) - No such process 00:16:14.691 20:47:39 -- target/zcopy.sh@49 -- # wait 2755974 00:16:14.691 20:47:39 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.691 20:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.691 20:47:39 -- common/autotest_common.sh@10 -- # set +x 00:16:14.691 20:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.691 20:47:39 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:14.691 20:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.691 20:47:39 -- common/autotest_common.sh@10 -- # set +x 00:16:14.691 delay0 00:16:14.691 20:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.691 20:47:39 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:14.691 20:47:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:14.691 20:47:39 -- common/autotest_common.sh@10 -- # set +x 00:16:14.691 20:47:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:14.691 20:47:39 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:14.691 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.692 [2024-04-24 20:47:39.293874] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:21.274 Initializing NVMe Controllers 00:16:21.274 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:21.274 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:21.274 Initialization complete. Launching workers. 
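The delay-bdev/abort sequence traced above can be replayed by hand. The following is a minimal sketch rather than the test script itself, assuming an SPDK checkout with scripts/rpc.py and build/examples/abort available, a running nvmf target that already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and an existing malloc0 bdev to wrap; the values mirror the traced run, and rpc_cmd in the log corresponds to scripts/rpc.py in SPDK's autotest helpers:

# Swap nsid 1 on cnode1 for a delay bdev so the abort tool has slow I/O to cancel
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
# 1,000,000 us average and p99 latency for both reads and writes, layered on malloc0
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# Drive random 50/50 I/O over NVMe/TCP for 5 s at queue depth 64 and submit aborts against it
./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'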
00:16:21.274 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 397 00:16:21.274 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 684, failed to submit 33 00:16:21.274 success 541, unsuccess 143, failed 0 00:16:21.274 20:47:45 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:21.274 20:47:45 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:21.274 20:47:45 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:21.274 20:47:45 -- nvmf/common.sh@117 -- # sync 00:16:21.274 20:47:45 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:21.274 20:47:45 -- nvmf/common.sh@120 -- # set +e 00:16:21.274 20:47:45 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:21.274 20:47:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:21.274 rmmod nvme_tcp 00:16:21.274 rmmod nvme_fabrics 00:16:21.274 rmmod nvme_keyring 00:16:21.274 20:47:45 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.274 20:47:45 -- nvmf/common.sh@124 -- # set -e 00:16:21.274 20:47:45 -- nvmf/common.sh@125 -- # return 0 00:16:21.274 20:47:45 -- nvmf/common.sh@478 -- # '[' -n 2753883 ']' 00:16:21.274 20:47:45 -- nvmf/common.sh@479 -- # killprocess 2753883 00:16:21.274 20:47:45 -- common/autotest_common.sh@936 -- # '[' -z 2753883 ']' 00:16:21.275 20:47:45 -- common/autotest_common.sh@940 -- # kill -0 2753883 00:16:21.275 20:47:45 -- common/autotest_common.sh@941 -- # uname 00:16:21.275 20:47:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:21.275 20:47:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2753883 00:16:21.275 20:47:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:21.275 20:47:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:21.275 20:47:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2753883' 00:16:21.275 killing process with pid 2753883 00:16:21.275 20:47:45 -- common/autotest_common.sh@955 -- # kill 2753883 00:16:21.275 20:47:45 -- common/autotest_common.sh@960 -- # wait 2753883 00:16:21.275 20:47:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:21.275 20:47:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:21.275 20:47:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:21.275 20:47:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.275 20:47:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.275 20:47:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.275 20:47:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.275 20:47:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.191 20:47:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.191 00:16:23.191 real 0m32.695s 00:16:23.191 user 0m44.445s 00:16:23.191 sys 0m9.809s 00:16:23.191 20:47:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:23.191 20:47:47 -- common/autotest_common.sh@10 -- # set +x 00:16:23.191 ************************************ 00:16:23.191 END TEST nvmf_zcopy 00:16:23.191 ************************************ 00:16:23.191 20:47:47 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:23.191 20:47:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:23.191 20:47:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:23.191 20:47:47 -- common/autotest_common.sh@10 -- # set +x 00:16:23.453 ************************************ 
00:16:23.453 START TEST nvmf_nmic 00:16:23.453 ************************************ 00:16:23.453 20:47:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:23.453 * Looking for test storage... 00:16:23.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.453 20:47:48 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.453 20:47:48 -- nvmf/common.sh@7 -- # uname -s 00:16:23.453 20:47:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.453 20:47:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.453 20:47:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.453 20:47:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.453 20:47:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.453 20:47:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.453 20:47:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.453 20:47:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.453 20:47:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.453 20:47:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.453 20:47:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:23.453 20:47:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:23.453 20:47:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.453 20:47:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.453 20:47:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.453 20:47:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.453 20:47:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.453 20:47:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.453 20:47:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.453 20:47:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.453 20:47:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.453 20:47:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.453 20:47:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.453 20:47:48 -- paths/export.sh@5 -- # export PATH 00:16:23.453 20:47:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.453 20:47:48 -- nvmf/common.sh@47 -- # : 0 00:16:23.453 20:47:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.453 20:47:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.453 20:47:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.453 20:47:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.453 20:47:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.453 20:47:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.453 20:47:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.453 20:47:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.453 20:47:48 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.453 20:47:48 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.453 20:47:48 -- target/nmic.sh@14 -- # nvmftestinit 00:16:23.453 20:47:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:23.453 20:47:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.453 20:47:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:23.453 20:47:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:23.453 20:47:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:23.453 20:47:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.454 20:47:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.454 20:47:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.454 20:47:48 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:23.454 20:47:48 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:23.454 20:47:48 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.454 20:47:48 -- common/autotest_common.sh@10 -- # set +x 00:16:30.181 20:47:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:30.181 20:47:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:30.181 20:47:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:30.181 20:47:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:30.181 20:47:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:30.181 20:47:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:30.181 20:47:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:30.181 20:47:54 -- nvmf/common.sh@295 -- # net_devs=() 00:16:30.181 20:47:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:30.181 20:47:54 -- nvmf/common.sh@296 -- # 
e810=() 00:16:30.181 20:47:54 -- nvmf/common.sh@296 -- # local -ga e810 00:16:30.181 20:47:54 -- nvmf/common.sh@297 -- # x722=() 00:16:30.181 20:47:54 -- nvmf/common.sh@297 -- # local -ga x722 00:16:30.181 20:47:54 -- nvmf/common.sh@298 -- # mlx=() 00:16:30.181 20:47:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:30.181 20:47:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.181 20:47:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:30.181 20:47:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:30.181 20:47:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:30.181 20:47:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.181 20:47:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:30.181 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:30.181 20:47:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:30.181 20:47:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:30.181 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:30.181 20:47:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:30.181 20:47:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.181 20:47:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.181 20:47:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:30.181 20:47:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.181 20:47:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:30.181 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:16:30.181 20:47:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.181 20:47:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:30.181 20:47:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.181 20:47:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:30.181 20:47:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.181 20:47:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:30.181 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:30.181 20:47:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.181 20:47:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:30.181 20:47:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:30.181 20:47:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:30.181 20:47:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:30.181 20:47:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.181 20:47:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.181 20:47:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.181 20:47:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:30.181 20:47:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.181 20:47:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.181 20:47:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:30.181 20:47:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.181 20:47:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.181 20:47:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:30.181 20:47:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:30.181 20:47:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.181 20:47:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.181 20:47:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.181 20:47:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.181 20:47:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:30.181 20:47:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:30.181 20:47:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:30.181 20:47:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:30.181 20:47:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:30.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:16:30.181 00:16:30.181 --- 10.0.0.2 ping statistics --- 00:16:30.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.181 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:16:30.181 20:47:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:30.442 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.442 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:16:30.442 00:16:30.443 --- 10.0.0.1 ping statistics --- 00:16:30.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.443 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:16:30.443 20:47:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.443 20:47:54 -- nvmf/common.sh@411 -- # return 0 00:16:30.443 20:47:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:30.443 20:47:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.443 20:47:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:30.443 20:47:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:30.443 20:47:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.443 20:47:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:30.443 20:47:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:30.443 20:47:54 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:30.443 20:47:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:30.443 20:47:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:30.443 20:47:54 -- common/autotest_common.sh@10 -- # set +x 00:16:30.443 20:47:54 -- nvmf/common.sh@470 -- # nvmfpid=2762452 00:16:30.443 20:47:54 -- nvmf/common.sh@471 -- # waitforlisten 2762452 00:16:30.443 20:47:54 -- common/autotest_common.sh@817 -- # '[' -z 2762452 ']' 00:16:30.443 20:47:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.443 20:47:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.443 20:47:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.443 20:47:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.443 20:47:54 -- common/autotest_common.sh@10 -- # set +x 00:16:30.443 20:47:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:30.443 [2024-04-24 20:47:54.921176] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:16:30.443 [2024-04-24 20:47:54.921241] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.443 EAL: No free 2048 kB hugepages reported on node 1 00:16:30.443 [2024-04-24 20:47:55.009729] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.704 [2024-04-24 20:47:55.105050] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.704 [2024-04-24 20:47:55.105112] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.704 [2024-04-24 20:47:55.105120] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.704 [2024-04-24 20:47:55.105127] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.704 [2024-04-24 20:47:55.105134] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
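The rpc_cmd calls traced below build the nmic target configuration one object at a time. A condensed sketch of the equivalent scripts/rpc.py invocations, assuming an SPDK checkout, the cvl_0_0_ns_spdk namespace and 10.0.0.x addressing set up above, and the default /var/tmp/spdk.sock RPC socket (flags and values mirror the traced commands):

# Start the target inside the test namespace (mirrors the nvmfappstart line above)
sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# Block until the app has finished initializing and the RPC socket is usable
./scripts/rpc.py framework_wait_init
# TCP transport; -o and -u 8192 mirror the options traced in the log
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks, exported as a namespace of cnode1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Host side: attach to the 4420 listener, as the traced nvme connect does further down
sudo nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 \
    --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204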
00:16:30.704 [2024-04-24 20:47:55.105270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.704 [2024-04-24 20:47:55.105398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.704 [2024-04-24 20:47:55.105563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.704 [2024-04-24 20:47:55.105565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.275 20:47:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:31.275 20:47:55 -- common/autotest_common.sh@850 -- # return 0 00:16:31.275 20:47:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:31.275 20:47:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:31.275 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.275 20:47:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.275 20:47:55 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.275 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.275 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.275 [2024-04-24 20:47:55.845531] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.275 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.275 20:47:55 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:31.275 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.275 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.275 Malloc0 00:16:31.275 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.275 20:47:55 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:31.275 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.275 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.275 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.275 20:47:55 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.275 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.275 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.275 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.275 20:47:55 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.275 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.275 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.275 [2024-04-24 20:47:55.904911] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.275 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.275 20:47:55 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:31.275 test case1: single bdev can't be used in multiple subsystems 00:16:31.275 20:47:55 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:31.275 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.275 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.536 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.536 20:47:55 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:31.536 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:16:31.536 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.536 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.536 20:47:55 -- target/nmic.sh@28 -- # nmic_status=0 00:16:31.536 20:47:55 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:16:31.536 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.536 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.536 [2024-04-24 20:47:55.940847] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:16:31.536 [2024-04-24 20:47:55.940865] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:16:31.536 [2024-04-24 20:47:55.940872] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:31.536 request: 00:16:31.536 { 00:16:31.536 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:16:31.536 "namespace": { 00:16:31.536 "bdev_name": "Malloc0", 00:16:31.536 "no_auto_visible": false 00:16:31.536 }, 00:16:31.536 "method": "nvmf_subsystem_add_ns", 00:16:31.536 "req_id": 1 00:16:31.536 } 00:16:31.536 Got JSON-RPC error response 00:16:31.536 response: 00:16:31.536 { 00:16:31.536 "code": -32602, 00:16:31.536 "message": "Invalid parameters" 00:16:31.536 } 00:16:31.536 20:47:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:31.536 20:47:55 -- target/nmic.sh@29 -- # nmic_status=1 00:16:31.536 20:47:55 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:16:31.536 20:47:55 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:16:31.536 Adding namespace failed - expected result. 00:16:31.536 20:47:55 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:16:31.536 test case2: host connect to nvmf target in multiple paths 00:16:31.536 20:47:55 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:31.536 20:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.536 20:47:55 -- common/autotest_common.sh@10 -- # set +x 00:16:31.536 [2024-04-24 20:47:55.952980] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:31.536 20:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.536 20:47:55 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:32.923 20:47:57 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:16:34.307 20:47:58 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.307 20:47:58 -- common/autotest_common.sh@1184 -- # local i=0 00:16:34.307 20:47:58 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.307 20:47:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:16:34.307 20:47:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:36.855 20:48:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:36.855 20:48:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:36.855 20:48:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.855 20:48:00 -- 
common/autotest_common.sh@1193 -- # nvme_devices=1 00:16:36.855 20:48:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:36.855 20:48:00 -- common/autotest_common.sh@1194 -- # return 0 00:16:36.855 20:48:00 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:36.855 [global] 00:16:36.855 thread=1 00:16:36.855 invalidate=1 00:16:36.855 rw=write 00:16:36.855 time_based=1 00:16:36.855 runtime=1 00:16:36.855 ioengine=libaio 00:16:36.855 direct=1 00:16:36.855 bs=4096 00:16:36.855 iodepth=1 00:16:36.855 norandommap=0 00:16:36.855 numjobs=1 00:16:36.855 00:16:36.855 verify_dump=1 00:16:36.855 verify_backlog=512 00:16:36.855 verify_state_save=0 00:16:36.855 do_verify=1 00:16:36.855 verify=crc32c-intel 00:16:36.855 [job0] 00:16:36.855 filename=/dev/nvme0n1 00:16:36.855 Could not set queue depth (nvme0n1) 00:16:36.855 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:36.855 fio-3.35 00:16:36.855 Starting 1 thread 00:16:38.242 00:16:38.242 job0: (groupid=0, jobs=1): err= 0: pid=2763877: Wed Apr 24 20:48:02 2024 00:16:38.242 read: IOPS=424, BW=1698KiB/s (1739kB/s)(1700KiB/1001msec) 00:16:38.242 slat (nsec): min=6290, max=58532, avg=22144.80, stdev=6961.46 00:16:38.242 clat (usec): min=407, max=42096, avg=1618.96, stdev=5943.71 00:16:38.242 lat (usec): min=414, max=42120, avg=1641.11, stdev=5944.11 00:16:38.242 clat percentiles (usec): 00:16:38.242 | 1.00th=[ 465], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 668], 00:16:38.242 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 766], 60.00th=[ 799], 00:16:38.242 | 70.00th=[ 824], 80.00th=[ 848], 90.00th=[ 881], 95.00th=[ 922], 00:16:38.242 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:38.242 | 99.99th=[42206] 00:16:38.242 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:16:38.242 slat (nsec): min=9054, max=66996, avg=27384.03, stdev=9276.18 00:16:38.242 clat (usec): min=239, max=772, avg=551.23, stdev=104.77 00:16:38.242 lat (usec): min=249, max=814, avg=578.61, stdev=108.65 00:16:38.242 clat percentiles (usec): 00:16:38.242 | 1.00th=[ 273], 5.00th=[ 351], 10.00th=[ 404], 20.00th=[ 465], 00:16:38.242 | 30.00th=[ 510], 40.00th=[ 537], 50.00th=[ 553], 60.00th=[ 578], 00:16:38.242 | 70.00th=[ 619], 80.00th=[ 652], 90.00th=[ 676], 95.00th=[ 701], 00:16:38.242 | 99.00th=[ 750], 99.50th=[ 766], 99.90th=[ 775], 99.95th=[ 775], 00:16:38.242 | 99.99th=[ 775] 00:16:38.242 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:38.242 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:38.242 lat (usec) : 250=0.43%, 500=16.01%, 750=58.38%, 1000=24.12% 00:16:38.242 lat (msec) : 2=0.11%, 50=0.96% 00:16:38.242 cpu : usr=1.20%, sys=2.50%, ctx=937, majf=0, minf=1 00:16:38.242 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:38.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:38.242 issued rwts: total=425,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:38.242 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:38.242 00:16:38.242 Run status group 0 (all jobs): 00:16:38.242 READ: bw=1698KiB/s (1739kB/s), 1698KiB/s-1698KiB/s (1739kB/s-1739kB/s), io=1700KiB (1741kB), run=1001-1001msec 00:16:38.242 WRITE: bw=2046KiB/s (2095kB/s), 
2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:16:38.242 00:16:38.242 Disk stats (read/write): 00:16:38.242 nvme0n1: ios=342/512, merge=0/0, ticks=633/273, in_queue=906, util=92.99% 00:16:38.242 20:48:02 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:38.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:38.242 20:48:02 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:38.242 20:48:02 -- common/autotest_common.sh@1205 -- # local i=0 00:16:38.242 20:48:02 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:16:38.242 20:48:02 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.242 20:48:02 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:16:38.242 20:48:02 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:38.242 20:48:02 -- common/autotest_common.sh@1217 -- # return 0 00:16:38.242 20:48:02 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:38.242 20:48:02 -- target/nmic.sh@53 -- # nvmftestfini 00:16:38.242 20:48:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:38.242 20:48:02 -- nvmf/common.sh@117 -- # sync 00:16:38.242 20:48:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:38.242 20:48:02 -- nvmf/common.sh@120 -- # set +e 00:16:38.242 20:48:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:38.242 20:48:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:38.242 rmmod nvme_tcp 00:16:38.242 rmmod nvme_fabrics 00:16:38.242 rmmod nvme_keyring 00:16:38.242 20:48:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.242 20:48:02 -- nvmf/common.sh@124 -- # set -e 00:16:38.242 20:48:02 -- nvmf/common.sh@125 -- # return 0 00:16:38.242 20:48:02 -- nvmf/common.sh@478 -- # '[' -n 2762452 ']' 00:16:38.242 20:48:02 -- nvmf/common.sh@479 -- # killprocess 2762452 00:16:38.242 20:48:02 -- common/autotest_common.sh@936 -- # '[' -z 2762452 ']' 00:16:38.242 20:48:02 -- common/autotest_common.sh@940 -- # kill -0 2762452 00:16:38.242 20:48:02 -- common/autotest_common.sh@941 -- # uname 00:16:38.242 20:48:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:38.242 20:48:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2762452 00:16:38.242 20:48:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:38.242 20:48:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:38.242 20:48:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2762452' 00:16:38.242 killing process with pid 2762452 00:16:38.242 20:48:02 -- common/autotest_common.sh@955 -- # kill 2762452 00:16:38.242 20:48:02 -- common/autotest_common.sh@960 -- # wait 2762452 00:16:38.504 20:48:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:38.504 20:48:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:38.504 20:48:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:38.504 20:48:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.504 20:48:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.504 20:48:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.504 20:48:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.504 20:48:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.419 20:48:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:40.419 00:16:40.419 real 0m16.984s 00:16:40.419 user 0m48.303s 00:16:40.419 sys 0m6.057s 
00:16:40.419 20:48:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:40.419 20:48:04 -- common/autotest_common.sh@10 -- # set +x 00:16:40.419 ************************************ 00:16:40.419 END TEST nvmf_nmic 00:16:40.419 ************************************ 00:16:40.419 20:48:05 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:40.419 20:48:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:40.419 20:48:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.419 20:48:05 -- common/autotest_common.sh@10 -- # set +x 00:16:40.682 ************************************ 00:16:40.682 START TEST nvmf_fio_target 00:16:40.682 ************************************ 00:16:40.682 20:48:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:40.682 * Looking for test storage... 00:16:40.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.682 20:48:05 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.682 20:48:05 -- nvmf/common.sh@7 -- # uname -s 00:16:40.682 20:48:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.682 20:48:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.682 20:48:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.682 20:48:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.682 20:48:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.682 20:48:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.682 20:48:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.682 20:48:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.682 20:48:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.682 20:48:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.682 20:48:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:40.682 20:48:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:16:40.682 20:48:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.682 20:48:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.682 20:48:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.682 20:48:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.682 20:48:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.682 20:48:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.682 20:48:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.682 20:48:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.682 20:48:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.682 20:48:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.682 20:48:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.682 20:48:05 -- paths/export.sh@5 -- # export PATH 00:16:40.682 20:48:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.682 20:48:05 -- nvmf/common.sh@47 -- # : 0 00:16:40.682 20:48:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.682 20:48:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.682 20:48:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.682 20:48:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.682 20:48:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.682 20:48:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.682 20:48:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.682 20:48:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.682 20:48:05 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.682 20:48:05 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.682 20:48:05 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.682 20:48:05 -- target/fio.sh@16 -- # nvmftestinit 00:16:40.682 20:48:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:40.682 20:48:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.682 20:48:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:40.682 20:48:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:40.682 20:48:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:40.682 20:48:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.682 20:48:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.682 20:48:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.682 20:48:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:16:40.682 20:48:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:40.682 20:48:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:40.682 20:48:05 -- 
common/autotest_common.sh@10 -- # set +x 00:16:48.830 20:48:12 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:48.830 20:48:12 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.830 20:48:12 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.830 20:48:12 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.830 20:48:12 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.830 20:48:12 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.830 20:48:12 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.830 20:48:12 -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.830 20:48:12 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.830 20:48:12 -- nvmf/common.sh@296 -- # e810=() 00:16:48.830 20:48:12 -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.830 20:48:12 -- nvmf/common.sh@297 -- # x722=() 00:16:48.830 20:48:12 -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.830 20:48:12 -- nvmf/common.sh@298 -- # mlx=() 00:16:48.830 20:48:12 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.830 20:48:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.830 20:48:12 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.830 20:48:12 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.830 20:48:12 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.830 20:48:12 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.830 20:48:12 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.831 20:48:12 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.831 20:48:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.831 20:48:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:48.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:48.831 20:48:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.831 20:48:12 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:48.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:48.831 20:48:12 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:16:48.831 20:48:12 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.831 20:48:12 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.831 20:48:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.831 20:48:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:48.831 20:48:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.831 20:48:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:48.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:48.831 20:48:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.831 20:48:12 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.831 20:48:12 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.831 20:48:12 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:48.831 20:48:12 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.831 20:48:12 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:48.831 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:48.831 20:48:12 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.831 20:48:12 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:48.831 20:48:12 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:48.831 20:48:12 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:48.831 20:48:12 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.831 20:48:12 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.831 20:48:12 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.831 20:48:12 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.831 20:48:12 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.831 20:48:12 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.831 20:48:12 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.831 20:48:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.831 20:48:12 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.831 20:48:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.831 20:48:12 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.831 20:48:12 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.831 20:48:12 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.831 20:48:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.831 20:48:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.831 20:48:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.831 20:48:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.831 20:48:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.831 20:48:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.831 20:48:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:48.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.515 ms 00:16:48.831 00:16:48.831 --- 10.0.0.2 ping statistics --- 00:16:48.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.831 rtt min/avg/max/mdev = 0.515/0.515/0.515/0.000 ms 00:16:48.831 20:48:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:48.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:16:48.831 00:16:48.831 --- 10.0.0.1 ping statistics --- 00:16:48.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.831 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:16:48.831 20:48:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.831 20:48:12 -- nvmf/common.sh@411 -- # return 0 00:16:48.831 20:48:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:48.831 20:48:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.831 20:48:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:48.831 20:48:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.831 20:48:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:48.831 20:48:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:48.831 20:48:12 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:48.831 20:48:12 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:48.831 20:48:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:48.831 20:48:12 -- common/autotest_common.sh@10 -- # set +x 00:16:48.831 20:48:12 -- nvmf/common.sh@470 -- # nvmfpid=2768914 00:16:48.831 20:48:12 -- nvmf/common.sh@471 -- # waitforlisten 2768914 00:16:48.831 20:48:12 -- common/autotest_common.sh@817 -- # '[' -z 2768914 ']' 00:16:48.831 20:48:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.831 20:48:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:48.831 20:48:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.831 20:48:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:48.831 20:48:12 -- common/autotest_common.sh@10 -- # set +x 00:16:48.831 20:48:12 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:48.831 [2024-04-24 20:48:12.606292] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:16:48.831 [2024-04-24 20:48:12.606353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.831 EAL: No free 2048 kB hugepages reported on node 1 00:16:48.831 [2024-04-24 20:48:12.693545] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.831 [2024-04-24 20:48:12.783218] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.831 [2024-04-24 20:48:12.783270] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
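As an aside, the ping checks above are the tail end of nvmf_tcp_init, which moves the target-side e810 port into a dedicated network namespace before the fio_target run starts. A rough equivalent of that plumbing, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing that appear in the log, is:

    # move the target-side port into its own namespace and address both ends
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                 # sanity check, as above

The nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is why the subsequent listeners on 10.0.0.2 are reachable from the initiator interface.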
00:16:48.831 [2024-04-24 20:48:12.783278] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.831 [2024-04-24 20:48:12.783285] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.831 [2024-04-24 20:48:12.783291] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:48.831 [2024-04-24 20:48:12.783430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.831 [2024-04-24 20:48:12.783571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.831 [2024-04-24 20:48:12.783755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.831 [2024-04-24 20:48:12.783755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:49.092 20:48:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:49.092 20:48:13 -- common/autotest_common.sh@850 -- # return 0 00:16:49.092 20:48:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:49.092 20:48:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:49.092 20:48:13 -- common/autotest_common.sh@10 -- # set +x 00:16:49.092 20:48:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:49.092 20:48:13 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:49.092 [2024-04-24 20:48:13.715154] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:49.354 20:48:13 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.354 20:48:13 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:49.354 20:48:13 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.615 20:48:14 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:49.615 20:48:14 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:49.875 20:48:14 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:49.875 20:48:14 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.136 20:48:14 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:50.136 20:48:14 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:50.398 20:48:14 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.659 20:48:15 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:50.659 20:48:15 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.920 20:48:15 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:50.920 20:48:15 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:50.920 20:48:15 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:50.920 20:48:15 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:51.181 20:48:15 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:51.442 20:48:15 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:51.442 20:48:15 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:51.705 20:48:16 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:51.705 20:48:16 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:51.966 20:48:16 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:51.966 [2024-04-24 20:48:16.578087] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.226 20:48:16 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:52.226 20:48:16 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:52.487 20:48:17 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.397 20:48:18 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:54.397 20:48:18 -- common/autotest_common.sh@1184 -- # local i=0 00:16:54.398 20:48:18 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:16:54.398 20:48:18 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:16:54.398 20:48:18 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:16:54.398 20:48:18 -- common/autotest_common.sh@1191 -- # sleep 2 00:16:56.309 20:48:20 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:16:56.309 20:48:20 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:16:56.310 20:48:20 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.310 20:48:20 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:16:56.310 20:48:20 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.310 20:48:20 -- common/autotest_common.sh@1194 -- # return 0 00:16:56.310 20:48:20 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:56.310 [global] 00:16:56.310 thread=1 00:16:56.310 invalidate=1 00:16:56.310 rw=write 00:16:56.310 time_based=1 00:16:56.310 runtime=1 00:16:56.310 ioengine=libaio 00:16:56.310 direct=1 00:16:56.310 bs=4096 00:16:56.310 iodepth=1 00:16:56.310 norandommap=0 00:16:56.310 numjobs=1 00:16:56.310 00:16:56.310 verify_dump=1 00:16:56.310 verify_backlog=512 00:16:56.310 verify_state_save=0 00:16:56.310 do_verify=1 00:16:56.310 verify=crc32c-intel 00:16:56.310 [job0] 00:16:56.310 filename=/dev/nvme0n1 00:16:56.310 [job1] 00:16:56.310 filename=/dev/nvme0n2 00:16:56.310 [job2] 00:16:56.310 filename=/dev/nvme0n3 00:16:56.310 [job3] 00:16:56.310 filename=/dev/nvme0n4 00:16:56.310 Could not set queue depth (nvme0n1) 00:16:56.310 Could not set queue depth (nvme0n2) 00:16:56.310 Could not set queue depth (nvme0n3) 00:16:56.310 Could not set queue depth (nvme0n4) 00:16:56.570 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:16:56.570 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.570 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.570 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:56.570 fio-3.35 00:16:56.570 Starting 4 threads 00:16:57.969 00:16:57.969 job0: (groupid=0, jobs=1): err= 0: pid=2770774: Wed Apr 24 20:48:22 2024 00:16:57.969 read: IOPS=18, BW=73.9KiB/s (75.7kB/s)(76.0KiB/1028msec) 00:16:57.969 slat (nsec): min=25223, max=26208, avg=25738.74, stdev=242.57 00:16:57.969 clat (usec): min=1176, max=43095, avg=40086.11, stdev=9443.25 00:16:57.969 lat (usec): min=1202, max=43120, avg=40111.85, stdev=9443.33 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[41157], 20.00th=[41681], 00:16:57.969 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:57.969 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:16:57.969 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:16:57.969 | 99.99th=[43254] 00:16:57.969 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:16:57.969 slat (nsec): min=9694, max=69417, avg=30818.99, stdev=9115.96 00:16:57.969 clat (usec): min=143, max=1050, avg=480.26, stdev=145.11 00:16:57.969 lat (usec): min=153, max=1083, avg=511.08, stdev=147.77 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[ 157], 5.00th=[ 237], 10.00th=[ 293], 20.00th=[ 359], 00:16:57.969 | 30.00th=[ 400], 40.00th=[ 445], 50.00th=[ 482], 60.00th=[ 510], 00:16:57.969 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 685], 95.00th=[ 734], 00:16:57.969 | 99.00th=[ 816], 99.50th=[ 832], 99.90th=[ 1057], 99.95th=[ 1057], 00:16:57.969 | 99.99th=[ 1057] 00:16:57.969 bw ( KiB/s): min= 4096, max= 4096, per=38.61%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.969 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.969 lat (usec) : 250=5.46%, 500=49.53%, 750=37.85%, 1000=3.39% 00:16:57.969 lat (msec) : 2=0.38%, 50=3.39% 00:16:57.969 cpu : usr=0.39%, sys=1.85%, ctx=533, majf=0, minf=1 00:16:57.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.969 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.969 job1: (groupid=0, jobs=1): err= 0: pid=2770785: Wed Apr 24 20:48:22 2024 00:16:57.969 read: IOPS=18, BW=73.5KiB/s (75.3kB/s)(76.0KiB/1034msec) 00:16:57.969 slat (nsec): min=26003, max=26738, avg=26257.11, stdev=185.31 00:16:57.969 clat (usec): min=40759, max=42107, avg=41587.37, stdev=517.43 00:16:57.969 lat (usec): min=40785, max=42133, avg=41613.63, stdev=517.41 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:16:57.969 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:16:57.969 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:57.969 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:57.969 | 99.99th=[42206] 00:16:57.969 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:16:57.969 slat 
(usec): min=9, max=2542, avg=35.02, stdev=111.56 00:16:57.969 clat (usec): min=218, max=688, avg=432.11, stdev=92.20 00:16:57.969 lat (usec): min=233, max=3197, avg=467.13, stdev=154.90 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[ 233], 5.00th=[ 269], 10.00th=[ 297], 20.00th=[ 351], 00:16:57.969 | 30.00th=[ 383], 40.00th=[ 412], 50.00th=[ 437], 60.00th=[ 461], 00:16:57.969 | 70.00th=[ 490], 80.00th=[ 515], 90.00th=[ 545], 95.00th=[ 570], 00:16:57.969 | 99.00th=[ 627], 99.50th=[ 660], 99.90th=[ 693], 99.95th=[ 693], 00:16:57.969 | 99.99th=[ 693] 00:16:57.969 bw ( KiB/s): min= 4096, max= 4096, per=38.61%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.969 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.969 lat (usec) : 250=1.69%, 500=70.81%, 750=23.92% 00:16:57.969 lat (msec) : 50=3.58% 00:16:57.969 cpu : usr=0.39%, sys=1.84%, ctx=533, majf=0, minf=1 00:16:57.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.969 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.969 job2: (groupid=0, jobs=1): err= 0: pid=2770803: Wed Apr 24 20:48:22 2024 00:16:57.969 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:16:57.969 slat (nsec): min=8650, max=60022, avg=26003.26, stdev=3047.93 00:16:57.969 clat (usec): min=649, max=1230, avg=1012.08, stdev=85.40 00:16:57.969 lat (usec): min=675, max=1256, avg=1038.08, stdev=85.20 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[ 799], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 947], 00:16:57.969 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1029], 60.00th=[ 1045], 00:16:57.969 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1106], 95.00th=[ 1139], 00:16:57.969 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1237], 99.95th=[ 1237], 00:16:57.969 | 99.99th=[ 1237] 00:16:57.969 write: IOPS=693, BW=2773KiB/s (2840kB/s)(2776KiB/1001msec); 0 zone resets 00:16:57.969 slat (nsec): min=9838, max=56584, avg=30460.36, stdev=9673.77 00:16:57.969 clat (usec): min=261, max=971, avg=631.50, stdev=138.54 00:16:57.969 lat (usec): min=274, max=1005, avg=661.96, stdev=142.43 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[ 302], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 510], 00:16:57.969 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 685], 00:16:57.969 | 70.00th=[ 709], 80.00th=[ 750], 90.00th=[ 807], 95.00th=[ 857], 00:16:57.969 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 971], 99.95th=[ 971], 00:16:57.969 | 99.99th=[ 971] 00:16:57.969 bw ( KiB/s): min= 4096, max= 4096, per=38.61%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.969 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.969 lat (usec) : 500=10.86%, 750=35.66%, 1000=27.61% 00:16:57.969 lat (msec) : 2=25.87% 00:16:57.969 cpu : usr=1.60%, sys=3.80%, ctx=1207, majf=0, minf=1 00:16:57.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.969 issued rwts: total=512,694,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.969 job3: (groupid=0, jobs=1): err= 0: pid=2770809: 
Wed Apr 24 20:48:22 2024 00:16:57.969 read: IOPS=604, BW=2418KiB/s (2476kB/s)(2420KiB/1001msec) 00:16:57.969 slat (nsec): min=6814, max=58784, avg=23398.24, stdev=7580.82 00:16:57.969 clat (usec): min=487, max=1057, avg=807.66, stdev=69.33 00:16:57.969 lat (usec): min=513, max=1064, avg=831.06, stdev=70.73 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[ 578], 5.00th=[ 685], 10.00th=[ 725], 20.00th=[ 766], 00:16:57.969 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 832], 00:16:57.969 | 70.00th=[ 848], 80.00th=[ 865], 90.00th=[ 881], 95.00th=[ 898], 00:16:57.969 | 99.00th=[ 930], 99.50th=[ 938], 99.90th=[ 1057], 99.95th=[ 1057], 00:16:57.969 | 99.99th=[ 1057] 00:16:57.969 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:16:57.969 slat (nsec): min=9843, max=52779, avg=28467.37, stdev=10183.28 00:16:57.969 clat (usec): min=229, max=747, avg=446.40, stdev=72.35 00:16:57.969 lat (usec): min=239, max=781, avg=474.87, stdev=75.38 00:16:57.969 clat percentiles (usec): 00:16:57.969 | 1.00th=[ 277], 5.00th=[ 326], 10.00th=[ 351], 20.00th=[ 388], 00:16:57.969 | 30.00th=[ 420], 40.00th=[ 441], 50.00th=[ 453], 60.00th=[ 465], 00:16:57.969 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 553], 00:16:57.970 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 734], 99.95th=[ 750], 00:16:57.970 | 99.99th=[ 750] 00:16:57.970 bw ( KiB/s): min= 4096, max= 4096, per=38.61%, avg=4096.00, stdev= 0.00, samples=1 00:16:57.970 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:57.970 lat (usec) : 250=0.18%, 500=52.79%, 750=15.72%, 1000=31.25% 00:16:57.970 lat (msec) : 2=0.06% 00:16:57.970 cpu : usr=2.90%, sys=3.80%, ctx=1631, majf=0, minf=1 00:16:57.970 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:57.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.970 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:57.970 issued rwts: total=605,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:57.970 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:57.970 00:16:57.970 Run status group 0 (all jobs): 00:16:57.970 READ: bw=4468KiB/s (4575kB/s), 73.5KiB/s-2418KiB/s (75.3kB/s-2476kB/s), io=4620KiB (4731kB), run=1001-1034msec 00:16:57.970 WRITE: bw=10.4MiB/s (10.9MB/s), 1981KiB/s-4092KiB/s (2028kB/s-4190kB/s), io=10.7MiB (11.2MB), run=1001-1034msec 00:16:57.970 00:16:57.970 Disk stats (read/write): 00:16:57.970 nvme0n1: ios=63/512, merge=0/0, ticks=892/232, in_queue=1124, util=84.27% 00:16:57.970 nvme0n2: ios=54/512, merge=0/0, ticks=796/218, in_queue=1014, util=90.82% 00:16:57.970 nvme0n3: ios=519/512, merge=0/0, ticks=568/316, in_queue=884, util=95.14% 00:16:57.970 nvme0n4: ios=534/835, merge=0/0, ticks=1297/376, in_queue=1673, util=94.22% 00:16:57.970 20:48:22 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:57.970 [global] 00:16:57.970 thread=1 00:16:57.970 invalidate=1 00:16:57.970 rw=randwrite 00:16:57.970 time_based=1 00:16:57.970 runtime=1 00:16:57.970 ioengine=libaio 00:16:57.970 direct=1 00:16:57.970 bs=4096 00:16:57.970 iodepth=1 00:16:57.970 norandommap=0 00:16:57.970 numjobs=1 00:16:57.970 00:16:57.970 verify_dump=1 00:16:57.970 verify_backlog=512 00:16:57.970 verify_state_save=0 00:16:57.970 do_verify=1 00:16:57.970 verify=crc32c-intel 00:16:57.970 [job0] 00:16:57.970 filename=/dev/nvme0n1 00:16:57.970 [job1] 00:16:57.970 filename=/dev/nvme0n2 
00:16:57.970 [job2] 00:16:57.970 filename=/dev/nvme0n3 00:16:57.970 [job3] 00:16:57.970 filename=/dev/nvme0n4 00:16:57.970 Could not set queue depth (nvme0n1) 00:16:57.970 Could not set queue depth (nvme0n2) 00:16:57.970 Could not set queue depth (nvme0n3) 00:16:57.970 Could not set queue depth (nvme0n4) 00:16:58.263 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.263 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.263 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.263 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:58.263 fio-3.35 00:16:58.263 Starting 4 threads 00:16:59.650 00:16:59.650 job0: (groupid=0, jobs=1): err= 0: pid=2771268: Wed Apr 24 20:48:23 2024 00:16:59.650 read: IOPS=17, BW=70.0KiB/s (71.7kB/s)(72.0KiB/1029msec) 00:16:59.650 slat (nsec): min=26307, max=27170, avg=26669.33, stdev=218.01 00:16:59.650 clat (usec): min=724, max=42550, avg=39373.11, stdev=9659.85 00:16:59.650 lat (usec): min=751, max=42577, avg=39399.78, stdev=9659.84 00:16:59.650 clat percentiles (usec): 00:16:59.650 | 1.00th=[ 725], 5.00th=[ 725], 10.00th=[41157], 20.00th=[41157], 00:16:59.650 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:59.650 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:59.650 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:59.650 | 99.99th=[42730] 00:16:59.650 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:16:59.650 slat (nsec): min=8969, max=58917, avg=31192.77, stdev=8326.46 00:16:59.650 clat (usec): min=196, max=1687, avg=584.28, stdev=139.27 00:16:59.650 lat (usec): min=230, max=1720, avg=615.48, stdev=141.87 00:16:59.650 clat percentiles (usec): 00:16:59.650 | 1.00th=[ 281], 5.00th=[ 351], 10.00th=[ 416], 20.00th=[ 469], 00:16:59.650 | 30.00th=[ 510], 40.00th=[ 553], 50.00th=[ 594], 60.00th=[ 627], 00:16:59.650 | 70.00th=[ 660], 80.00th=[ 709], 90.00th=[ 742], 95.00th=[ 783], 00:16:59.650 | 99.00th=[ 832], 99.50th=[ 857], 99.90th=[ 1696], 99.95th=[ 1696], 00:16:59.650 | 99.99th=[ 1696] 00:16:59.650 bw ( KiB/s): min= 4087, max= 4087, per=51.69%, avg=4087.00, stdev= 0.00, samples=1 00:16:59.650 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:16:59.650 lat (usec) : 250=0.38%, 500=27.55%, 750=60.19%, 1000=8.49% 00:16:59.650 lat (msec) : 2=0.19%, 50=3.21% 00:16:59.650 cpu : usr=1.07%, sys=2.04%, ctx=532, majf=0, minf=1 00:16:59.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.650 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.650 job1: (groupid=0, jobs=1): err= 0: pid=2771281: Wed Apr 24 20:48:23 2024 00:16:59.650 read: IOPS=16, BW=65.6KiB/s (67.2kB/s)(68.0KiB/1036msec) 00:16:59.650 slat (nsec): min=25489, max=30088, avg=25990.00, stdev=1077.74 00:16:59.650 clat (usec): min=41157, max=42983, avg=42028.64, stdev=401.71 00:16:59.650 lat (usec): min=41187, max=43008, avg=42054.63, stdev=401.11 00:16:59.650 clat percentiles (usec): 00:16:59.650 | 1.00th=[41157], 5.00th=[41157], 
10.00th=[41681], 20.00th=[41681], 00:16:59.650 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:16:59.650 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:16:59.650 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:59.650 | 99.99th=[42730] 00:16:59.650 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:16:59.650 slat (nsec): min=8600, max=59002, avg=28693.14, stdev=9811.02 00:16:59.650 clat (usec): min=296, max=949, avg=589.71, stdev=112.46 00:16:59.650 lat (usec): min=306, max=985, avg=618.40, stdev=116.83 00:16:59.650 clat percentiles (usec): 00:16:59.650 | 1.00th=[ 330], 5.00th=[ 392], 10.00th=[ 433], 20.00th=[ 494], 00:16:59.650 | 30.00th=[ 537], 40.00th=[ 570], 50.00th=[ 594], 60.00th=[ 627], 00:16:59.650 | 70.00th=[ 652], 80.00th=[ 685], 90.00th=[ 734], 95.00th=[ 766], 00:16:59.650 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 947], 99.95th=[ 947], 00:16:59.650 | 99.99th=[ 947] 00:16:59.650 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.650 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.650 lat (usec) : 500=20.04%, 750=70.70%, 1000=6.05% 00:16:59.650 lat (msec) : 50=3.21% 00:16:59.650 cpu : usr=1.45%, sys=1.45%, ctx=530, majf=0, minf=1 00:16:59.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.650 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.650 job2: (groupid=0, jobs=1): err= 0: pid=2771298: Wed Apr 24 20:48:23 2024 00:16:59.650 read: IOPS=15, BW=62.7KiB/s (64.2kB/s)(64.0KiB/1021msec) 00:16:59.650 slat (nsec): min=24584, max=25067, avg=24824.12, stdev=142.32 00:16:59.650 clat (usec): min=41875, max=42961, avg=42154.43, stdev=388.59 00:16:59.650 lat (usec): min=41900, max=42986, avg=42179.26, stdev=388.60 00:16:59.650 clat percentiles (usec): 00:16:59.650 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:16:59.650 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:59.650 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:16:59.650 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:59.650 | 99.99th=[42730] 00:16:59.650 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:16:59.650 slat (nsec): min=9353, max=76429, avg=28026.32, stdev=9503.54 00:16:59.650 clat (usec): min=308, max=1020, avg=640.62, stdev=130.59 00:16:59.650 lat (usec): min=329, max=1057, avg=668.65, stdev=133.31 00:16:59.650 clat percentiles (usec): 00:16:59.650 | 1.00th=[ 367], 5.00th=[ 416], 10.00th=[ 465], 20.00th=[ 529], 00:16:59.650 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:16:59.650 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 799], 95.00th=[ 873], 00:16:59.650 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1020], 99.95th=[ 1020], 00:16:59.650 | 99.99th=[ 1020] 00:16:59.650 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.650 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.650 lat (usec) : 500=13.83%, 750=64.58%, 1000=18.18% 00:16:59.650 lat (msec) : 2=0.38%, 50=3.03% 00:16:59.650 cpu : usr=0.98%, sys=0.98%, ctx=529, majf=0, minf=1 
00:16:59.650 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.650 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.650 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.650 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.650 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.650 job3: (groupid=0, jobs=1): err= 0: pid=2771305: Wed Apr 24 20:48:23 2024 00:16:59.650 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:16:59.650 slat (nsec): min=25523, max=26213, avg=25784.41, stdev=157.96 00:16:59.650 clat (usec): min=40923, max=42299, avg=41515.71, stdev=545.02 00:16:59.650 lat (usec): min=40949, max=42325, avg=41541.49, stdev=544.99 00:16:59.650 clat percentiles (usec): 00:16:59.650 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:16:59.650 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:16:59.650 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:59.650 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:59.650 | 99.99th=[42206] 00:16:59.650 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:16:59.650 slat (nsec): min=8457, max=81004, avg=26183.95, stdev=10750.81 00:16:59.650 clat (usec): min=234, max=825, avg=555.40, stdev=124.40 00:16:59.650 lat (usec): min=243, max=858, avg=581.58, stdev=129.96 00:16:59.650 clat percentiles (usec): 00:16:59.651 | 1.00th=[ 265], 5.00th=[ 334], 10.00th=[ 371], 20.00th=[ 441], 00:16:59.651 | 30.00th=[ 498], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 594], 00:16:59.651 | 70.00th=[ 635], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 734], 00:16:59.651 | 99.00th=[ 791], 99.50th=[ 807], 99.90th=[ 824], 99.95th=[ 824], 00:16:59.651 | 99.99th=[ 824] 00:16:59.651 bw ( KiB/s): min= 4096, max= 4096, per=51.80%, avg=4096.00, stdev= 0.00, samples=1 00:16:59.651 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:59.651 lat (usec) : 250=0.19%, 500=29.49%, 750=64.46%, 1000=2.65% 00:16:59.651 lat (msec) : 50=3.21% 00:16:59.651 cpu : usr=1.39%, sys=1.29%, ctx=530, majf=0, minf=1 00:16:59.651 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:59.651 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.651 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:59.651 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:59.651 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:59.651 00:16:59.651 Run status group 0 (all jobs): 00:16:59.651 READ: bw=263KiB/s (269kB/s), 62.7KiB/s-70.0KiB/s (64.2kB/s-71.7kB/s), io=272KiB (279kB), run=1008-1036msec 00:16:59.651 WRITE: bw=7907KiB/s (8097kB/s), 1977KiB/s-2032KiB/s (2024kB/s-2081kB/s), io=8192KiB (8389kB), run=1008-1036msec 00:16:59.651 00:16:59.651 Disk stats (read/write): 00:16:59.651 nvme0n1: ios=50/512, merge=0/0, ticks=1014/223, in_queue=1237, util=96.39% 00:16:59.651 nvme0n2: ios=61/512, merge=0/0, ticks=569/252, in_queue=821, util=89.30% 00:16:59.651 nvme0n3: ios=11/512, merge=0/0, ticks=464/316, in_queue=780, util=88.41% 00:16:59.651 nvme0n4: ios=54/512, merge=0/0, ticks=557/231, in_queue=788, util=91.79% 00:16:59.651 20:48:23 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:59.651 [global] 00:16:59.651 thread=1 00:16:59.651 invalidate=1 00:16:59.651 
rw=write 00:16:59.651 time_based=1 00:16:59.651 runtime=1 00:16:59.651 ioengine=libaio 00:16:59.651 direct=1 00:16:59.651 bs=4096 00:16:59.651 iodepth=128 00:16:59.651 norandommap=0 00:16:59.651 numjobs=1 00:16:59.651 00:16:59.651 verify_dump=1 00:16:59.651 verify_backlog=512 00:16:59.651 verify_state_save=0 00:16:59.651 do_verify=1 00:16:59.651 verify=crc32c-intel 00:16:59.651 [job0] 00:16:59.651 filename=/dev/nvme0n1 00:16:59.651 [job1] 00:16:59.651 filename=/dev/nvme0n2 00:16:59.651 [job2] 00:16:59.651 filename=/dev/nvme0n3 00:16:59.651 [job3] 00:16:59.651 filename=/dev/nvme0n4 00:16:59.651 Could not set queue depth (nvme0n1) 00:16:59.651 Could not set queue depth (nvme0n2) 00:16:59.651 Could not set queue depth (nvme0n3) 00:16:59.651 Could not set queue depth (nvme0n4) 00:16:59.911 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.911 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.911 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.911 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:59.911 fio-3.35 00:16:59.911 Starting 4 threads 00:17:00.942 00:17:00.942 job0: (groupid=0, jobs=1): err= 0: pid=2771729: Wed Apr 24 20:48:25 2024 00:17:00.942 read: IOPS=3688, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec) 00:17:00.942 slat (nsec): min=1313, max=31306k, avg=129479.34, stdev=1096692.48 00:17:00.942 clat (usec): min=2917, max=49798, avg=16755.51, stdev=7649.07 00:17:00.942 lat (usec): min=2924, max=58020, avg=16884.99, stdev=7737.57 00:17:00.942 clat percentiles (usec): 00:17:00.942 | 1.00th=[ 4146], 5.00th=[ 7832], 10.00th=[ 8586], 20.00th=[ 9765], 00:17:00.942 | 30.00th=[12125], 40.00th=[14091], 50.00th=[14877], 60.00th=[16909], 00:17:00.942 | 70.00th=[20579], 80.00th=[22414], 90.00th=[28443], 95.00th=[32637], 00:17:00.942 | 99.00th=[39584], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:17:00.942 | 99.99th=[49546] 00:17:00.942 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:17:00.942 slat (usec): min=2, max=18622, avg=113.26, stdev=832.18 00:17:00.942 clat (usec): min=1261, max=53457, avg=15922.30, stdev=8634.10 00:17:00.942 lat (usec): min=1270, max=53465, avg=16035.56, stdev=8697.35 00:17:00.942 clat percentiles (usec): 00:17:00.942 | 1.00th=[ 4293], 5.00th=[ 7308], 10.00th=[ 7767], 20.00th=[ 8717], 00:17:00.942 | 30.00th=[10421], 40.00th=[11600], 50.00th=[12911], 60.00th=[15664], 00:17:00.942 | 70.00th=[19268], 80.00th=[22414], 90.00th=[27919], 95.00th=[33162], 00:17:00.942 | 99.00th=[47973], 99.50th=[51643], 99.90th=[53216], 99.95th=[53216], 00:17:00.942 | 99.99th=[53216] 00:17:00.942 bw ( KiB/s): min=12288, max=20416, per=22.81%, avg=16352.00, stdev=5747.36, samples=2 00:17:00.942 iops : min= 3072, max= 5104, avg=4088.00, stdev=1436.84, samples=2 00:17:00.942 lat (msec) : 2=0.10%, 4=0.73%, 10=22.91%, 20=46.06%, 50=29.80% 00:17:00.942 lat (msec) : 100=0.40% 00:17:00.942 cpu : usr=3.09%, sys=3.89%, ctx=326, majf=0, minf=2 00:17:00.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:00.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.942 issued rwts: total=3703,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.942 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:17:00.942 job1: (groupid=0, jobs=1): err= 0: pid=2771736: Wed Apr 24 20:48:25 2024 00:17:00.942 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:17:00.942 slat (nsec): min=1307, max=10212k, avg=99153.15, stdev=748079.95 00:17:00.942 clat (usec): min=3781, max=37269, avg=11754.49, stdev=3491.13 00:17:00.942 lat (usec): min=3786, max=37272, avg=11853.64, stdev=3543.60 00:17:00.942 clat percentiles (usec): 00:17:00.942 | 1.00th=[ 4424], 5.00th=[ 7308], 10.00th=[ 9503], 20.00th=[10028], 00:17:00.942 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10945], 60.00th=[11207], 00:17:00.942 | 70.00th=[11863], 80.00th=[13304], 90.00th=[15926], 95.00th=[17957], 00:17:00.942 | 99.00th=[26346], 99.50th=[32637], 99.90th=[34341], 99.95th=[37487], 00:17:00.942 | 99.99th=[37487] 00:17:00.942 write: IOPS=5238, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1009msec); 0 zone resets 00:17:00.942 slat (usec): min=2, max=8011, avg=88.40, stdev=405.56 00:17:00.942 clat (usec): min=1141, max=49862, avg=12823.82, stdev=8166.10 00:17:00.942 lat (usec): min=1152, max=49870, avg=12912.22, stdev=8224.34 00:17:00.942 clat percentiles (usec): 00:17:00.942 | 1.00th=[ 3064], 5.00th=[ 4817], 10.00th=[ 6325], 20.00th=[ 8586], 00:17:00.942 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:17:00.942 | 70.00th=[10945], 80.00th=[13960], 90.00th=[23725], 95.00th=[31327], 00:17:00.942 | 99.00th=[44827], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:17:00.942 | 99.99th=[50070] 00:17:00.942 bw ( KiB/s): min=16696, max=24576, per=28.78%, avg=20636.00, stdev=5572.00, samples=2 00:17:00.942 iops : min= 4174, max= 6144, avg=5159.00, stdev=1393.00, samples=2 00:17:00.942 lat (msec) : 2=0.09%, 4=1.59%, 10=23.08%, 20=67.39%, 50=7.85% 00:17:00.942 cpu : usr=4.17%, sys=4.56%, ctx=677, majf=0, minf=1 00:17:00.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:00.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.943 issued rwts: total=5120,5286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.943 job2: (groupid=0, jobs=1): err= 0: pid=2771742: Wed Apr 24 20:48:25 2024 00:17:00.943 read: IOPS=4451, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1005msec) 00:17:00.943 slat (nsec): min=1374, max=12184k, avg=106917.88, stdev=803110.96 00:17:00.943 clat (usec): min=2663, max=26068, avg=13013.85, stdev=3258.11 00:17:00.943 lat (usec): min=4393, max=26097, avg=13120.77, stdev=3315.91 00:17:00.943 clat percentiles (usec): 00:17:00.943 | 1.00th=[ 6128], 5.00th=[ 8717], 10.00th=[10159], 20.00th=[11207], 00:17:00.943 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:17:00.943 | 70.00th=[13173], 80.00th=[15139], 90.00th=[17433], 95.00th=[20055], 00:17:00.943 | 99.00th=[22938], 99.50th=[23462], 99.90th=[24773], 99.95th=[24773], 00:17:00.943 | 99.99th=[26084] 00:17:00.943 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:17:00.943 slat (usec): min=2, max=18138, avg=108.26, stdev=709.27 00:17:00.943 clat (usec): min=1198, max=77912, avg=14523.72, stdev=12734.94 00:17:00.943 lat (usec): min=1216, max=77914, avg=14631.98, stdev=12802.45 00:17:00.943 clat percentiles (usec): 00:17:00.943 | 1.00th=[ 3785], 5.00th=[ 6194], 10.00th=[ 7439], 20.00th=[10159], 00:17:00.943 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:17:00.943 | 
70.00th=[12387], 80.00th=[13304], 90.00th=[17433], 95.00th=[47973], 00:17:00.943 | 99.00th=[70779], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119], 00:17:00.943 | 99.99th=[78119] 00:17:00.943 bw ( KiB/s): min=15632, max=21232, per=25.71%, avg=18432.00, stdev=3959.80, samples=2 00:17:00.943 iops : min= 3908, max= 5308, avg=4608.00, stdev=989.95, samples=2 00:17:00.943 lat (msec) : 2=0.02%, 4=0.53%, 10=13.72%, 20=78.42%, 50=4.95% 00:17:00.943 lat (msec) : 100=2.36% 00:17:00.943 cpu : usr=3.29%, sys=5.08%, ctx=476, majf=0, minf=1 00:17:00.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:00.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.943 issued rwts: total=4474,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.943 job3: (groupid=0, jobs=1): err= 0: pid=2771749: Wed Apr 24 20:48:25 2024 00:17:00.943 read: IOPS=3989, BW=15.6MiB/s (16.3MB/s)(15.7MiB/1007msec) 00:17:00.943 slat (nsec): min=1393, max=21302k, avg=142906.25, stdev=1226921.18 00:17:00.943 clat (usec): min=4502, max=44923, avg=17401.49, stdev=6904.04 00:17:00.943 lat (usec): min=4509, max=52793, avg=17544.39, stdev=7001.95 00:17:00.943 clat percentiles (usec): 00:17:00.943 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11207], 00:17:00.943 | 30.00th=[11994], 40.00th=[14484], 50.00th=[15533], 60.00th=[19530], 00:17:00.943 | 70.00th=[21103], 80.00th=[22676], 90.00th=[26084], 95.00th=[30802], 00:17:00.943 | 99.00th=[37487], 99.50th=[37487], 99.90th=[39584], 99.95th=[42206], 00:17:00.943 | 99.99th=[44827] 00:17:00.943 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:17:00.943 slat (usec): min=2, max=19208, avg=98.52, stdev=713.79 00:17:00.943 clat (usec): min=1181, max=39631, avg=14078.40, stdev=5093.58 00:17:00.943 lat (usec): min=1191, max=39634, avg=14176.92, stdev=5165.26 00:17:00.943 clat percentiles (usec): 00:17:00.943 | 1.00th=[ 4293], 5.00th=[ 6521], 10.00th=[ 8455], 20.00th=[11469], 00:17:00.943 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:17:00.943 | 70.00th=[16581], 80.00th=[18744], 90.00th=[21627], 95.00th=[22414], 00:17:00.943 | 99.00th=[32113], 99.50th=[32113], 99.90th=[35914], 99.95th=[38536], 00:17:00.943 | 99.99th=[39584] 00:17:00.943 bw ( KiB/s): min=12288, max=20480, per=22.85%, avg=16384.00, stdev=5792.62, samples=2 00:17:00.943 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:17:00.943 lat (msec) : 2=0.02%, 4=0.21%, 10=10.97%, 20=62.30%, 50=26.50% 00:17:00.943 cpu : usr=3.48%, sys=3.98%, ctx=457, majf=0, minf=1 00:17:00.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:00.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.943 issued rwts: total=4017,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.943 00:17:00.943 Run status group 0 (all jobs): 00:17:00.943 READ: bw=67.0MiB/s (70.3MB/s), 14.4MiB/s-19.8MiB/s (15.1MB/s-20.8MB/s), io=67.6MiB (70.9MB), run=1004-1009msec 00:17:00.943 WRITE: bw=70.0MiB/s (73.4MB/s), 15.9MiB/s-20.5MiB/s (16.7MB/s-21.5MB/s), io=70.6MiB (74.1MB), run=1004-1009msec 00:17:00.943 00:17:00.943 Disk stats (read/write): 00:17:00.943 nvme0n1: ios=3082/3079, 
merge=0/0, ticks=48458/44650, in_queue=93108, util=84.37% 00:17:00.943 nvme0n2: ios=4148/4591, merge=0/0, ticks=44819/57102, in_queue=101921, util=88.80% 00:17:00.943 nvme0n3: ios=3644/3855, merge=0/0, ticks=44208/56498, in_queue=100706, util=92.83% 00:17:00.943 nvme0n4: ios=3124/3359, merge=0/0, ticks=55170/47744, in_queue=102914, util=96.70% 00:17:00.943 20:48:25 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:00.943 [global] 00:17:00.943 thread=1 00:17:00.943 invalidate=1 00:17:00.943 rw=randwrite 00:17:00.943 time_based=1 00:17:00.943 runtime=1 00:17:00.943 ioengine=libaio 00:17:00.943 direct=1 00:17:00.943 bs=4096 00:17:00.943 iodepth=128 00:17:00.943 norandommap=0 00:17:00.943 numjobs=1 00:17:00.943 00:17:00.943 verify_dump=1 00:17:00.943 verify_backlog=512 00:17:00.943 verify_state_save=0 00:17:00.943 do_verify=1 00:17:00.943 verify=crc32c-intel 00:17:00.943 [job0] 00:17:00.943 filename=/dev/nvme0n1 00:17:00.943 [job1] 00:17:00.943 filename=/dev/nvme0n2 00:17:00.943 [job2] 00:17:00.943 filename=/dev/nvme0n3 00:17:00.943 [job3] 00:17:00.943 filename=/dev/nvme0n4 00:17:01.219 Could not set queue depth (nvme0n1) 00:17:01.219 Could not set queue depth (nvme0n2) 00:17:01.219 Could not set queue depth (nvme0n3) 00:17:01.219 Could not set queue depth (nvme0n4) 00:17:01.480 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.480 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.480 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.480 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:01.480 fio-3.35 00:17:01.480 Starting 4 threads 00:17:02.878 00:17:02.878 job0: (groupid=0, jobs=1): err= 0: pid=2772245: Wed Apr 24 20:48:27 2024 00:17:02.878 read: IOPS=5820, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1009msec) 00:17:02.878 slat (nsec): min=1283, max=21010k, avg=86000.83, stdev=684692.57 00:17:02.878 clat (usec): min=2260, max=41261, avg=11408.24, stdev=3942.18 00:17:02.878 lat (usec): min=2966, max=41286, avg=11494.24, stdev=3989.09 00:17:02.878 clat percentiles (usec): 00:17:02.878 | 1.00th=[ 4228], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9110], 00:17:02.878 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:17:02.878 | 70.00th=[11338], 80.00th=[13304], 90.00th=[16712], 95.00th=[18482], 00:17:02.878 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29492], 99.95th=[29754], 00:17:02.878 | 99.99th=[41157] 00:17:02.878 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec); 0 zone resets 00:17:02.878 slat (usec): min=2, max=14692, avg=71.01, stdev=485.56 00:17:02.878 clat (usec): min=1275, max=40775, avg=9935.11, stdev=2926.33 00:17:02.878 lat (usec): min=1381, max=40799, avg=10006.13, stdev=2982.72 00:17:02.878 clat percentiles (usec): 00:17:02.878 | 1.00th=[ 3556], 5.00th=[ 5473], 10.00th=[ 6718], 20.00th=[ 8160], 00:17:02.878 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10290], 00:17:02.878 | 70.00th=[10290], 80.00th=[10421], 90.00th=[11076], 95.00th=[16057], 00:17:02.878 | 99.00th=[20055], 99.50th=[20055], 99.90th=[26084], 99.95th=[26084], 00:17:02.878 | 99.99th=[40633] 00:17:02.878 bw ( KiB/s): min=24560, max=24592, per=32.02%, avg=24576.00, stdev=22.63, samples=2 00:17:02.878 iops : min= 6140, 
max= 6148, avg=6144.00, stdev= 5.66, samples=2 00:17:02.878 lat (msec) : 2=0.01%, 4=1.32%, 10=41.59%, 20=54.56%, 50=2.52% 00:17:02.878 cpu : usr=3.67%, sys=7.54%, ctx=640, majf=0, minf=1 00:17:02.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:02.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.878 issued rwts: total=5873,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.878 job1: (groupid=0, jobs=1): err= 0: pid=2772254: Wed Apr 24 20:48:27 2024 00:17:02.878 read: IOPS=7461, BW=29.1MiB/s (30.6MB/s)(29.2MiB/1003msec) 00:17:02.878 slat (nsec): min=1233, max=30066k, avg=68495.27, stdev=663528.41 00:17:02.878 clat (usec): min=1378, max=76806, avg=9091.38, stdev=6805.45 00:17:02.878 lat (usec): min=2201, max=76829, avg=9159.88, stdev=6861.81 00:17:02.878 clat percentiles (usec): 00:17:02.878 | 1.00th=[ 3195], 5.00th=[ 4555], 10.00th=[ 5997], 20.00th=[ 6325], 00:17:02.878 | 30.00th=[ 6718], 40.00th=[ 6980], 50.00th=[ 7308], 60.00th=[ 8094], 00:17:02.878 | 70.00th=[ 8848], 80.00th=[10290], 90.00th=[11863], 95.00th=[12780], 00:17:02.878 | 99.00th=[46924], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:17:02.878 | 99.99th=[77071] 00:17:02.878 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:17:02.878 slat (usec): min=2, max=27478, avg=56.98, stdev=497.98 00:17:02.878 clat (usec): min=979, max=34248, avg=7660.06, stdev=4593.53 00:17:02.878 lat (usec): min=988, max=34633, avg=7717.04, stdev=4616.62 00:17:02.878 clat percentiles (usec): 00:17:02.878 | 1.00th=[ 2474], 5.00th=[ 3916], 10.00th=[ 4424], 20.00th=[ 5407], 00:17:02.878 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 6783], 60.00th=[ 6915], 00:17:02.878 | 70.00th=[ 7111], 80.00th=[ 9110], 90.00th=[10552], 95.00th=[11731], 00:17:02.878 | 99.00th=[32113], 99.50th=[32637], 99.90th=[34341], 99.95th=[34341], 00:17:02.878 | 99.99th=[34341] 00:17:02.878 bw ( KiB/s): min=28672, max=32768, per=40.03%, avg=30720.00, stdev=2896.31, samples=2 00:17:02.878 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:17:02.878 lat (usec) : 1000=0.01% 00:17:02.878 lat (msec) : 2=0.20%, 4=4.52%, 10=77.55%, 20=14.37%, 50=3.32% 00:17:02.878 lat (msec) : 100=0.02% 00:17:02.878 cpu : usr=5.69%, sys=6.29%, ctx=680, majf=0, minf=1 00:17:02.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:17:02.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.878 issued rwts: total=7484,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.878 job2: (groupid=0, jobs=1): err= 0: pid=2772264: Wed Apr 24 20:48:27 2024 00:17:02.878 read: IOPS=2324, BW=9299KiB/s (9522kB/s)(9420KiB/1013msec) 00:17:02.878 slat (nsec): min=1253, max=15099k, avg=153683.74, stdev=889494.77 00:17:02.878 clat (usec): min=8451, max=55988, avg=19637.19, stdev=8152.96 00:17:02.878 lat (usec): min=10851, max=58628, avg=19790.87, stdev=8216.75 00:17:02.878 clat percentiles (usec): 00:17:02.878 | 1.00th=[12125], 5.00th=[13566], 10.00th=[14746], 20.00th=[15664], 00:17:02.878 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16450], 60.00th=[16909], 00:17:02.878 | 70.00th=[17695], 80.00th=[21365], 90.00th=[26608], 95.00th=[43254], 
00:17:02.878 | 99.00th=[47449], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:17:02.878 | 99.99th=[55837] 00:17:02.878 write: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec); 0 zone resets 00:17:02.878 slat (usec): min=2, max=35301, avg=245.81, stdev=1420.24 00:17:02.878 clat (usec): min=9382, max=84701, avg=31972.38, stdev=13692.58 00:17:02.878 lat (usec): min=9390, max=84732, avg=32218.20, stdev=13772.99 00:17:02.878 clat percentiles (usec): 00:17:02.878 | 1.00th=[13829], 5.00th=[19006], 10.00th=[21627], 20.00th=[22676], 00:17:02.878 | 30.00th=[23200], 40.00th=[23200], 50.00th=[23987], 60.00th=[28967], 00:17:02.878 | 70.00th=[33817], 80.00th=[42730], 90.00th=[57410], 95.00th=[61604], 00:17:02.878 | 99.00th=[64226], 99.50th=[64226], 99.90th=[69731], 99.95th=[73925], 00:17:02.878 | 99.99th=[84411] 00:17:02.878 bw ( KiB/s): min= 9784, max=10696, per=13.34%, avg=10240.00, stdev=644.88, samples=2 00:17:02.878 iops : min= 2446, max= 2674, avg=2560.00, stdev=161.22, samples=2 00:17:02.878 lat (msec) : 10=0.16%, 20=39.45%, 50=51.41%, 100=8.97% 00:17:02.878 cpu : usr=1.78%, sys=2.47%, ctx=351, majf=0, minf=1 00:17:02.878 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:17:02.878 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:02.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.878 issued rwts: total=2355,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.878 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.878 job3: (groupid=0, jobs=1): err= 0: pid=2772268: Wed Apr 24 20:48:27 2024 00:17:02.879 read: IOPS=3010, BW=11.8MiB/s (12.3MB/s)(11.9MiB/1014msec) 00:17:02.879 slat (nsec): min=1366, max=17698k, avg=161097.24, stdev=1130330.87 00:17:02.879 clat (usec): min=6274, max=56932, avg=17481.85, stdev=7379.08 00:17:02.879 lat (usec): min=6278, max=56941, avg=17642.95, stdev=7476.10 00:17:02.879 clat percentiles (usec): 00:17:02.879 | 1.00th=[ 6587], 5.00th=[13042], 10.00th=[13173], 20.00th=[13435], 00:17:02.879 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13829], 60.00th=[14222], 00:17:02.879 | 70.00th=[18482], 80.00th=[21890], 90.00th=[26084], 95.00th=[33817], 00:17:02.879 | 99.00th=[44827], 99.50th=[48497], 99.90th=[56886], 99.95th=[56886], 00:17:02.879 | 99.99th=[56886] 00:17:02.879 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:17:02.879 slat (usec): min=2, max=19674, avg=162.03, stdev=744.52 00:17:02.879 clat (usec): min=1220, max=56939, avg=24474.63, stdev=9766.43 00:17:02.879 lat (usec): min=1227, max=56947, avg=24636.66, stdev=9847.53 00:17:02.879 clat percentiles (usec): 00:17:02.879 | 1.00th=[ 3916], 5.00th=[ 9372], 10.00th=[11076], 20.00th=[16909], 00:17:02.879 | 30.00th=[21365], 40.00th=[22676], 50.00th=[22938], 60.00th=[23200], 00:17:02.879 | 70.00th=[26870], 80.00th=[34341], 90.00th=[40109], 95.00th=[41681], 00:17:02.879 | 99.00th=[45351], 99.50th=[45876], 99.90th=[46400], 99.95th=[56886], 00:17:02.879 | 99.99th=[56886] 00:17:02.879 bw ( KiB/s): min=12288, max=12288, per=16.01%, avg=12288.00, stdev= 0.00, samples=2 00:17:02.879 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:17:02.879 lat (msec) : 2=0.15%, 4=0.39%, 10=3.53%, 20=47.49%, 50=48.20% 00:17:02.879 lat (msec) : 100=0.24% 00:17:02.879 cpu : usr=1.58%, sys=3.16%, ctx=379, majf=0, minf=1 00:17:02.879 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:02.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:17:02.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:02.879 issued rwts: total=3053,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:02.879 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:02.879 00:17:02.879 Run status group 0 (all jobs): 00:17:02.879 READ: bw=72.3MiB/s (75.8MB/s), 9299KiB/s-29.1MiB/s (9522kB/s-30.6MB/s), io=73.3MiB (76.9MB), run=1003-1014msec 00:17:02.879 WRITE: bw=75.0MiB/s (78.6MB/s), 9.87MiB/s-29.9MiB/s (10.4MB/s-31.4MB/s), io=76.0MiB (79.7MB), run=1003-1014msec 00:17:02.879 00:17:02.879 Disk stats (read/write): 00:17:02.879 nvme0n1: ios=4767/5120, merge=0/0, ticks=52320/49826, in_queue=102146, util=88.08% 00:17:02.879 nvme0n2: ios=6180/6443, merge=0/0, ticks=42627/38114, in_queue=80741, util=87.96% 00:17:02.879 nvme0n3: ios=2063/2063, merge=0/0, ticks=17849/33470, in_queue=51319, util=91.87% 00:17:02.879 nvme0n4: ios=2333/2560, merge=0/0, ticks=39563/63267, in_queue=102830, util=95.62% 00:17:02.879 20:48:27 -- target/fio.sh@55 -- # sync 00:17:02.879 20:48:27 -- target/fio.sh@59 -- # fio_pid=2772574 00:17:02.879 20:48:27 -- target/fio.sh@61 -- # sleep 3 00:17:02.879 20:48:27 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:02.879 [global] 00:17:02.879 thread=1 00:17:02.879 invalidate=1 00:17:02.879 rw=read 00:17:02.879 time_based=1 00:17:02.879 runtime=10 00:17:02.879 ioengine=libaio 00:17:02.879 direct=1 00:17:02.879 bs=4096 00:17:02.879 iodepth=1 00:17:02.879 norandommap=1 00:17:02.879 numjobs=1 00:17:02.879 00:17:02.879 [job0] 00:17:02.879 filename=/dev/nvme0n1 00:17:02.879 [job1] 00:17:02.879 filename=/dev/nvme0n2 00:17:02.879 [job2] 00:17:02.879 filename=/dev/nvme0n3 00:17:02.879 [job3] 00:17:02.879 filename=/dev/nvme0n4 00:17:02.879 Could not set queue depth (nvme0n1) 00:17:02.879 Could not set queue depth (nvme0n2) 00:17:02.879 Could not set queue depth (nvme0n3) 00:17:02.879 Could not set queue depth (nvme0n4) 00:17:03.143 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.143 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.143 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.143 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:03.143 fio-3.35 00:17:03.143 Starting 4 threads 00:17:05.685 20:48:30 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:05.945 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=10117120, buflen=4096 00:17:05.945 fio: pid=2772775, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:05.945 20:48:30 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:06.206 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=282624, buflen=4096 00:17:06.206 fio: pid=2772771, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:06.206 20:48:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.206 20:48:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:06.466 20:48:30 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.466 20:48:30 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:06.466 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=2822144, buflen=4096 00:17:06.466 fio: pid=2772765, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:06.466 20:48:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.466 20:48:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:06.466 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=8605696, buflen=4096 00:17:06.466 fio: pid=2772766, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:06.466 00:17:06.466 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2772765: Wed Apr 24 20:48:31 2024 00:17:06.466 read: IOPS=225, BW=902KiB/s (924kB/s)(2756KiB/3054msec) 00:17:06.466 slat (usec): min=6, max=19790, avg=81.13, stdev=975.28 00:17:06.466 clat (usec): min=451, max=42851, avg=4314.44, stdev=11481.80 00:17:06.466 lat (usec): min=475, max=56727, avg=4395.65, stdev=11612.54 00:17:06.466 clat percentiles (usec): 00:17:06.466 | 1.00th=[ 506], 5.00th=[ 644], 10.00th=[ 676], 20.00th=[ 725], 00:17:06.466 | 30.00th=[ 750], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 807], 00:17:06.466 | 70.00th=[ 824], 80.00th=[ 840], 90.00th=[ 922], 95.00th=[41157], 00:17:06.466 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:17:06.466 | 99.99th=[42730] 00:17:06.466 bw ( KiB/s): min= 96, max= 2984, per=16.46%, avg=1081.60, stdev=1379.55, samples=5 00:17:06.466 iops : min= 24, max= 746, avg=270.40, stdev=344.89, samples=5 00:17:06.466 lat (usec) : 500=0.72%, 750=28.55%, 1000=61.74% 00:17:06.466 lat (msec) : 2=0.14%, 50=8.70% 00:17:06.466 cpu : usr=0.29%, sys=0.49%, ctx=694, majf=0, minf=1 00:17:06.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.466 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2772766: Wed Apr 24 20:48:31 2024 00:17:06.466 read: IOPS=647, BW=2590KiB/s (2652kB/s)(8404KiB/3245msec) 00:17:06.466 slat (usec): min=6, max=28800, avg=44.53, stdev=695.72 00:17:06.466 clat (usec): min=485, max=42339, avg=1484.34, stdev=5296.10 00:17:06.466 lat (usec): min=492, max=70967, avg=1528.88, stdev=5514.06 00:17:06.466 clat percentiles (usec): 00:17:06.466 | 1.00th=[ 562], 5.00th=[ 652], 10.00th=[ 685], 20.00th=[ 725], 00:17:06.466 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 799], 60.00th=[ 816], 00:17:06.466 | 70.00th=[ 832], 80.00th=[ 840], 90.00th=[ 873], 95.00th=[ 898], 00:17:06.466 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:06.466 | 99.99th=[42206] 00:17:06.466 bw ( KiB/s): min= 88, max= 4968, per=42.52%, avg=2793.33, stdev=2417.53, samples=6 00:17:06.466 iops : min= 22, max= 1242, avg=698.33, stdev=604.38, samples=6 00:17:06.466 lat (usec) : 500=0.19%, 750=25.88%, 1000=71.93% 00:17:06.466 lat (msec) : 2=0.24%, 50=1.71% 00:17:06.466 cpu : usr=0.71%, sys=1.54%, 
ctx=2105, majf=0, minf=1 00:17:06.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 issued rwts: total=2102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.466 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2772771: Wed Apr 24 20:48:31 2024 00:17:06.466 read: IOPS=24, BW=97.6KiB/s (100.0kB/s)(276KiB/2827msec) 00:17:06.466 slat (usec): min=24, max=13684, avg=220.45, stdev=1632.53 00:17:06.466 clat (usec): min=899, max=42848, avg=40428.44, stdev=6898.19 00:17:06.466 lat (usec): min=929, max=55968, avg=40651.72, stdev=7142.53 00:17:06.466 clat percentiles (usec): 00:17:06.466 | 1.00th=[ 898], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:06.466 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:17:06.466 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:06.466 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:06.466 | 99.99th=[42730] 00:17:06.466 bw ( KiB/s): min= 96, max= 104, per=1.51%, avg=99.20, stdev= 4.38, samples=5 00:17:06.466 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:17:06.466 lat (usec) : 1000=2.86% 00:17:06.466 lat (msec) : 50=95.71% 00:17:06.466 cpu : usr=0.00%, sys=0.11%, ctx=71, majf=0, minf=1 00:17:06.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.466 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2772775: Wed Apr 24 20:48:31 2024 00:17:06.466 read: IOPS=941, BW=3765KiB/s (3856kB/s)(9880KiB/2624msec) 00:17:06.466 slat (nsec): min=7167, max=64967, avg=26099.57, stdev=3201.21 00:17:06.466 clat (usec): min=616, max=1541, avg=1019.02, stdev=78.93 00:17:06.466 lat (usec): min=644, max=1567, avg=1045.12, stdev=78.90 00:17:06.466 clat percentiles (usec): 00:17:06.466 | 1.00th=[ 791], 5.00th=[ 881], 10.00th=[ 922], 20.00th=[ 955], 00:17:06.466 | 30.00th=[ 988], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:17:06.466 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1123], 00:17:06.466 | 99.00th=[ 1205], 99.50th=[ 1221], 99.90th=[ 1319], 99.95th=[ 1319], 00:17:06.466 | 99.99th=[ 1549] 00:17:06.466 bw ( KiB/s): min= 3768, max= 3864, per=57.97%, avg=3808.00, stdev=41.95, samples=5 00:17:06.466 iops : min= 942, max= 966, avg=952.00, stdev=10.49, samples=5 00:17:06.466 lat (usec) : 750=0.24%, 1000=34.24% 00:17:06.466 lat (msec) : 2=65.48% 00:17:06.466 cpu : usr=1.75%, sys=3.39%, ctx=2471, majf=0, minf=2 00:17:06.466 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:06.466 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:06.466 issued rwts: total=2471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:06.466 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:06.466 00:17:06.466 Run status group 0 
(all jobs): 00:17:06.466 READ: bw=6569KiB/s (6727kB/s), 97.6KiB/s-3765KiB/s (100.0kB/s-3856kB/s), io=20.8MiB (21.8MB), run=2624-3245msec 00:17:06.466 00:17:06.467 Disk stats (read/write): 00:17:06.467 nvme0n1: ios=684/0, merge=0/0, ticks=2751/0, in_queue=2751, util=93.66% 00:17:06.467 nvme0n2: ios=2097/0, merge=0/0, ticks=2923/0, in_queue=2923, util=94.24% 00:17:06.467 nvme0n3: ios=64/0, merge=0/0, ticks=2580/0, in_queue=2580, util=96.03% 00:17:06.467 nvme0n4: ios=2461/0, merge=0/0, ticks=2306/0, in_queue=2306, util=96.42% 00:17:06.727 20:48:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.727 20:48:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:06.989 20:48:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:06.989 20:48:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:07.250 20:48:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:07.250 20:48:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:07.509 20:48:31 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:07.509 20:48:31 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:07.509 20:48:32 -- target/fio.sh@69 -- # fio_status=0 00:17:07.509 20:48:32 -- target/fio.sh@70 -- # wait 2772574 00:17:07.509 20:48:32 -- target/fio.sh@70 -- # fio_status=4 00:17:07.509 20:48:32 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.770 20:48:32 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:07.770 20:48:32 -- common/autotest_common.sh@1205 -- # local i=0 00:17:07.770 20:48:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:07.770 20:48:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.770 20:48:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:07.770 20:48:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.770 20:48:32 -- common/autotest_common.sh@1217 -- # return 0 00:17:07.770 20:48:32 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:07.770 20:48:32 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:07.770 nvmf hotplug test: fio failed as expected 00:17:07.770 20:48:32 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.031 20:48:32 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:08.031 20:48:32 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:08.031 20:48:32 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:08.031 20:48:32 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:08.031 20:48:32 -- target/fio.sh@91 -- # nvmftestfini 00:17:08.031 20:48:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:08.031 20:48:32 -- nvmf/common.sh@117 -- # sync 00:17:08.031 20:48:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.031 20:48:32 -- nvmf/common.sh@120 -- # set +e 00:17:08.031 20:48:32 -- nvmf/common.sh@121 -- # for i in {1..20} 
00:17:08.031 20:48:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.031 rmmod nvme_tcp 00:17:08.031 rmmod nvme_fabrics 00:17:08.031 rmmod nvme_keyring 00:17:08.031 20:48:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.031 20:48:32 -- nvmf/common.sh@124 -- # set -e 00:17:08.031 20:48:32 -- nvmf/common.sh@125 -- # return 0 00:17:08.031 20:48:32 -- nvmf/common.sh@478 -- # '[' -n 2768914 ']' 00:17:08.031 20:48:32 -- nvmf/common.sh@479 -- # killprocess 2768914 00:17:08.031 20:48:32 -- common/autotest_common.sh@936 -- # '[' -z 2768914 ']' 00:17:08.031 20:48:32 -- common/autotest_common.sh@940 -- # kill -0 2768914 00:17:08.031 20:48:32 -- common/autotest_common.sh@941 -- # uname 00:17:08.031 20:48:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.031 20:48:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2768914 00:17:08.031 20:48:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:08.031 20:48:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:08.031 20:48:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2768914' 00:17:08.031 killing process with pid 2768914 00:17:08.031 20:48:32 -- common/autotest_common.sh@955 -- # kill 2768914 00:17:08.031 20:48:32 -- common/autotest_common.sh@960 -- # wait 2768914 00:17:08.291 20:48:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:08.291 20:48:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:08.291 20:48:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:08.291 20:48:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:08.291 20:48:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:08.291 20:48:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.291 20:48:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:08.291 20:48:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.201 20:48:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:10.201 00:17:10.201 real 0m29.654s 00:17:10.201 user 2m44.040s 00:17:10.201 sys 0m9.130s 00:17:10.201 20:48:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:10.201 20:48:34 -- common/autotest_common.sh@10 -- # set +x 00:17:10.201 ************************************ 00:17:10.201 END TEST nvmf_fio_target 00:17:10.201 ************************************ 00:17:10.461 20:48:34 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:10.461 20:48:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:10.461 20:48:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:10.461 20:48:34 -- common/autotest_common.sh@10 -- # set +x 00:17:10.461 ************************************ 00:17:10.461 START TEST nvmf_bdevio 00:17:10.461 ************************************ 00:17:10.461 20:48:35 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:10.722 * Looking for test storage... 
00:17:10.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.722 20:48:35 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.722 20:48:35 -- nvmf/common.sh@7 -- # uname -s 00:17:10.722 20:48:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.722 20:48:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.722 20:48:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.722 20:48:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.722 20:48:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.722 20:48:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.722 20:48:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.722 20:48:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.722 20:48:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.722 20:48:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.722 20:48:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:10.722 20:48:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:10.722 20:48:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.722 20:48:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.722 20:48:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.722 20:48:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.722 20:48:35 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.722 20:48:35 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:10.722 20:48:35 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.722 20:48:35 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.722 20:48:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.722 20:48:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.722 20:48:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.722 20:48:35 -- paths/export.sh@5 -- # export PATH 00:17:10.722 20:48:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.722 20:48:35 -- nvmf/common.sh@47 -- # : 0 00:17:10.722 20:48:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:10.722 20:48:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:10.722 20:48:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.722 20:48:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.722 20:48:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.722 20:48:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:10.722 20:48:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:10.722 20:48:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:10.722 20:48:35 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:10.722 20:48:35 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:10.722 20:48:35 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:10.722 20:48:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:10.722 20:48:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.722 20:48:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:10.722 20:48:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:10.722 20:48:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:10.722 20:48:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.722 20:48:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:10.722 20:48:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.722 20:48:35 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:10.722 20:48:35 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:10.722 20:48:35 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:10.722 20:48:35 -- common/autotest_common.sh@10 -- # set +x 00:17:17.422 20:48:42 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:17.422 20:48:42 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:17.422 20:48:42 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:17.422 20:48:42 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:17.422 20:48:42 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:17.422 20:48:42 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:17.422 20:48:42 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:17.422 20:48:42 -- nvmf/common.sh@295 -- # net_devs=() 00:17:17.422 20:48:42 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:17.422 20:48:42 -- nvmf/common.sh@296 
-- # e810=() 00:17:17.422 20:48:42 -- nvmf/common.sh@296 -- # local -ga e810 00:17:17.422 20:48:42 -- nvmf/common.sh@297 -- # x722=() 00:17:17.422 20:48:42 -- nvmf/common.sh@297 -- # local -ga x722 00:17:17.422 20:48:42 -- nvmf/common.sh@298 -- # mlx=() 00:17:17.422 20:48:42 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:17.422 20:48:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.422 20:48:42 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.422 20:48:42 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.422 20:48:42 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.422 20:48:42 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.422 20:48:42 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.423 20:48:42 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.423 20:48:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.423 20:48:42 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.423 20:48:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.423 20:48:42 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.423 20:48:42 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:17.423 20:48:42 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:17.423 20:48:42 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:17.423 20:48:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.423 20:48:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:17.423 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:17.423 20:48:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:17.423 20:48:42 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:17.423 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:17.423 20:48:42 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:17.423 20:48:42 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.423 20:48:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.423 20:48:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:17.423 20:48:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.423 20:48:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:17.423 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:17:17.423 20:48:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.423 20:48:42 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:17.423 20:48:42 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.423 20:48:42 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:17.423 20:48:42 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.423 20:48:42 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:17.423 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:17.423 20:48:42 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.423 20:48:42 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:17.423 20:48:42 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:17.423 20:48:42 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:17.423 20:48:42 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:17.423 20:48:42 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.423 20:48:42 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.423 20:48:42 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.423 20:48:42 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:17.423 20:48:42 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.423 20:48:42 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.423 20:48:42 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:17.423 20:48:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.423 20:48:42 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.423 20:48:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:17.423 20:48:42 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:17.684 20:48:42 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.684 20:48:42 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.684 20:48:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.684 20:48:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.684 20:48:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:17.684 20:48:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.684 20:48:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.945 20:48:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.945 20:48:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:17.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.475 ms 00:17:17.945 00:17:17.945 --- 10.0.0.2 ping statistics --- 00:17:17.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.945 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:17:17.945 20:48:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:17.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:17:17.946 00:17:17.946 --- 10.0.0.1 ping statistics --- 00:17:17.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.946 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:17:17.946 20:48:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.946 20:48:42 -- nvmf/common.sh@411 -- # return 0 00:17:17.946 20:48:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:17.946 20:48:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.946 20:48:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:17.946 20:48:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:17.946 20:48:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.946 20:48:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:17.946 20:48:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:17.946 20:48:42 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:17.946 20:48:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:17.946 20:48:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:17.946 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:17:17.946 20:48:42 -- nvmf/common.sh@470 -- # nvmfpid=2778019 00:17:17.946 20:48:42 -- nvmf/common.sh@471 -- # waitforlisten 2778019 00:17:17.946 20:48:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:17.946 20:48:42 -- common/autotest_common.sh@817 -- # '[' -z 2778019 ']' 00:17:17.946 20:48:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.946 20:48:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.946 20:48:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.946 20:48:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.946 20:48:42 -- common/autotest_common.sh@10 -- # set +x 00:17:17.946 [2024-04-24 20:48:42.458135] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:17:17.946 [2024-04-24 20:48:42.458200] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.946 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.946 [2024-04-24 20:48:42.548856] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.206 [2024-04-24 20:48:42.639204] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.206 [2024-04-24 20:48:42.639266] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.206 [2024-04-24 20:48:42.639274] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.206 [2024-04-24 20:48:42.639280] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.206 [2024-04-24 20:48:42.639287] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:18.206 [2024-04-24 20:48:42.639458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:18.206 [2024-04-24 20:48:42.639609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:18.206 [2024-04-24 20:48:42.640072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:18.206 [2024-04-24 20:48:42.640074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.775 20:48:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:18.775 20:48:43 -- common/autotest_common.sh@850 -- # return 0 00:17:18.775 20:48:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:18.775 20:48:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:18.775 20:48:43 -- common/autotest_common.sh@10 -- # set +x 00:17:18.775 20:48:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.775 20:48:43 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.775 20:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.775 20:48:43 -- common/autotest_common.sh@10 -- # set +x 00:17:18.775 [2024-04-24 20:48:43.402407] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.775 20:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.775 20:48:43 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.775 20:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.775 20:48:43 -- common/autotest_common.sh@10 -- # set +x 00:17:19.035 Malloc0 00:17:19.035 20:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.035 20:48:43 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:19.035 20:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.035 20:48:43 -- common/autotest_common.sh@10 -- # set +x 00:17:19.035 20:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.035 20:48:43 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:19.035 20:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.035 20:48:43 -- common/autotest_common.sh@10 -- # set +x 00:17:19.035 20:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.035 20:48:43 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.035 20:48:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.035 20:48:43 -- common/autotest_common.sh@10 -- # set +x 00:17:19.035 [2024-04-24 20:48:43.467540] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.035 20:48:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.035 20:48:43 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:19.035 20:48:43 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:19.035 20:48:43 -- nvmf/common.sh@521 -- # config=() 00:17:19.035 20:48:43 -- nvmf/common.sh@521 -- # local subsystem config 00:17:19.035 20:48:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:19.035 20:48:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:19.035 { 00:17:19.035 "params": { 00:17:19.035 "name": "Nvme$subsystem", 00:17:19.035 "trtype": "$TEST_TRANSPORT", 00:17:19.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.035 "adrfam": "ipv4", 00:17:19.035 "trsvcid": 
"$NVMF_PORT", 00:17:19.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.035 "hdgst": ${hdgst:-false}, 00:17:19.035 "ddgst": ${ddgst:-false} 00:17:19.035 }, 00:17:19.035 "method": "bdev_nvme_attach_controller" 00:17:19.035 } 00:17:19.035 EOF 00:17:19.035 )") 00:17:19.035 20:48:43 -- nvmf/common.sh@543 -- # cat 00:17:19.035 20:48:43 -- nvmf/common.sh@545 -- # jq . 00:17:19.035 20:48:43 -- nvmf/common.sh@546 -- # IFS=, 00:17:19.035 20:48:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:19.035 "params": { 00:17:19.035 "name": "Nvme1", 00:17:19.035 "trtype": "tcp", 00:17:19.035 "traddr": "10.0.0.2", 00:17:19.035 "adrfam": "ipv4", 00:17:19.035 "trsvcid": "4420", 00:17:19.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.035 "hdgst": false, 00:17:19.035 "ddgst": false 00:17:19.035 }, 00:17:19.035 "method": "bdev_nvme_attach_controller" 00:17:19.035 }' 00:17:19.035 [2024-04-24 20:48:43.522385] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:17:19.035 [2024-04-24 20:48:43.522455] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2778160 ] 00:17:19.035 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.035 [2024-04-24 20:48:43.605441] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.296 [2024-04-24 20:48:43.700537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.296 [2024-04-24 20:48:43.700671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:19.296 [2024-04-24 20:48:43.700674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.556 I/O targets: 00:17:19.556 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:19.556 00:17:19.556 00:17:19.556 CUnit - A unit testing framework for C - Version 2.1-3 00:17:19.556 http://cunit.sourceforge.net/ 00:17:19.556 00:17:19.556 00:17:19.556 Suite: bdevio tests on: Nvme1n1 00:17:19.556 Test: blockdev write read block ...passed 00:17:19.556 Test: blockdev write zeroes read block ...passed 00:17:19.556 Test: blockdev write zeroes read no split ...passed 00:17:19.556 Test: blockdev write zeroes read split ...passed 00:17:19.556 Test: blockdev write zeroes read split partial ...passed 00:17:19.556 Test: blockdev reset ...[2024-04-24 20:48:44.142616] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:19.556 [2024-04-24 20:48:44.142684] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117c9c0 (9): Bad file descriptor 00:17:19.556 [2024-04-24 20:48:44.157860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:19.556 passed 00:17:19.816 Test: blockdev write read 8 blocks ...passed 00:17:19.816 Test: blockdev write read size > 128k ...passed 00:17:19.816 Test: blockdev write read invalid size ...passed 00:17:19.816 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:19.816 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:19.816 Test: blockdev write read max offset ...passed 00:17:19.816 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:19.816 Test: blockdev writev readv 8 blocks ...passed 00:17:19.816 Test: blockdev writev readv 30 x 1block ...passed 00:17:19.816 Test: blockdev writev readv block ...passed 00:17:19.816 Test: blockdev writev readv size > 128k ...passed 00:17:19.816 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:19.816 Test: blockdev comparev and writev ...[2024-04-24 20:48:44.380138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.816 [2024-04-24 20:48:44.380164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:19.816 [2024-04-24 20:48:44.380175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.816 [2024-04-24 20:48:44.380181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:19.816 [2024-04-24 20:48:44.380672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.816 [2024-04-24 20:48:44.380680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:19.816 [2024-04-24 20:48:44.380689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.816 [2024-04-24 20:48:44.380694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:19.816 [2024-04-24 20:48:44.381211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.816 [2024-04-24 20:48:44.381220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:19.817 [2024-04-24 20:48:44.381229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.817 [2024-04-24 20:48:44.381234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:19.817 [2024-04-24 20:48:44.381681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.817 [2024-04-24 20:48:44.381688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:19.817 [2024-04-24 20:48:44.381697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:19.817 [2024-04-24 20:48:44.381702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:19.817 passed 00:17:20.078 Test: blockdev nvme passthru rw ...passed 00:17:20.078 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:48:44.466416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:20.078 [2024-04-24 20:48:44.466427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:20.078 [2024-04-24 20:48:44.466782] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:20.078 [2024-04-24 20:48:44.466790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:20.078 [2024-04-24 20:48:44.467119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:20.078 [2024-04-24 20:48:44.467126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:20.078 [2024-04-24 20:48:44.467462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:20.078 [2024-04-24 20:48:44.467469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:20.078 passed 00:17:20.078 Test: blockdev nvme admin passthru ...passed 00:17:20.078 Test: blockdev copy ...passed 00:17:20.078 00:17:20.078 Run Summary: Type Total Ran Passed Failed Inactive 00:17:20.078 suites 1 1 n/a 0 0 00:17:20.078 tests 23 23 23 0 0 00:17:20.078 asserts 152 152 152 0 n/a 00:17:20.078 00:17:20.078 Elapsed time = 1.098 seconds 00:17:20.078 20:48:44 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.078 20:48:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.078 20:48:44 -- common/autotest_common.sh@10 -- # set +x 00:17:20.078 20:48:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.078 20:48:44 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:20.078 20:48:44 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:20.078 20:48:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:20.078 20:48:44 -- nvmf/common.sh@117 -- # sync 00:17:20.078 20:48:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.078 20:48:44 -- nvmf/common.sh@120 -- # set +e 00:17:20.078 20:48:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.078 20:48:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.078 rmmod nvme_tcp 00:17:20.078 rmmod nvme_fabrics 00:17:20.078 rmmod nvme_keyring 00:17:20.339 20:48:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.339 20:48:44 -- nvmf/common.sh@124 -- # set -e 00:17:20.339 20:48:44 -- nvmf/common.sh@125 -- # return 0 00:17:20.339 20:48:44 -- nvmf/common.sh@478 -- # '[' -n 2778019 ']' 00:17:20.339 20:48:44 -- nvmf/common.sh@479 -- # killprocess 2778019 00:17:20.339 20:48:44 -- common/autotest_common.sh@936 -- # '[' -z 2778019 ']' 00:17:20.339 20:48:44 -- common/autotest_common.sh@940 -- # kill -0 2778019 00:17:20.339 20:48:44 -- common/autotest_common.sh@941 -- # uname 00:17:20.339 20:48:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:20.339 20:48:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2778019 00:17:20.339 20:48:44 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:20.339 20:48:44 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:20.339 20:48:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2778019' 00:17:20.339 killing process with pid 2778019 00:17:20.339 20:48:44 -- common/autotest_common.sh@955 -- # kill 2778019 00:17:20.339 20:48:44 -- common/autotest_common.sh@960 -- # wait 2778019 00:17:20.339 20:48:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:20.339 20:48:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:20.339 20:48:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:20.339 20:48:44 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.339 20:48:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.339 20:48:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.339 20:48:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.339 20:48:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.883 20:48:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.884 00:17:22.884 real 0m12.024s 00:17:22.884 user 0m13.631s 00:17:22.884 sys 0m6.018s 00:17:22.884 20:48:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:22.884 20:48:47 -- common/autotest_common.sh@10 -- # set +x 00:17:22.884 ************************************ 00:17:22.884 END TEST nvmf_bdevio 00:17:22.884 ************************************ 00:17:22.884 20:48:47 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:17:22.884 20:48:47 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:22.884 20:48:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:22.884 20:48:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:22.884 20:48:47 -- common/autotest_common.sh@10 -- # set +x 00:17:22.884 ************************************ 00:17:22.884 START TEST nvmf_bdevio_no_huge 00:17:22.884 ************************************ 00:17:22.884 20:48:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:22.884 * Looking for test storage... 
00:17:22.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.884 20:48:47 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.884 20:48:47 -- nvmf/common.sh@7 -- # uname -s 00:17:22.884 20:48:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.884 20:48:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.884 20:48:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.884 20:48:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.884 20:48:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.884 20:48:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.884 20:48:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.884 20:48:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.884 20:48:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.884 20:48:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.884 20:48:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:22.884 20:48:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:22.884 20:48:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:22.884 20:48:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.884 20:48:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.884 20:48:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.884 20:48:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.884 20:48:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.884 20:48:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.884 20:48:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.884 20:48:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.884 20:48:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.884 20:48:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.884 20:48:47 -- paths/export.sh@5 -- # export PATH 00:17:22.884 20:48:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.884 20:48:47 -- nvmf/common.sh@47 -- # : 0 00:17:22.884 20:48:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:22.884 20:48:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:22.884 20:48:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.884 20:48:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.884 20:48:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.884 20:48:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:22.884 20:48:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:22.884 20:48:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:22.884 20:48:47 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:22.884 20:48:47 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:22.884 20:48:47 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:22.884 20:48:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:22.884 20:48:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.884 20:48:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:22.884 20:48:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:22.884 20:48:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:22.884 20:48:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.884 20:48:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.884 20:48:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.884 20:48:47 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:22.884 20:48:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:22.884 20:48:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:22.884 20:48:47 -- common/autotest_common.sh@10 -- # set +x 00:17:31.024 20:48:54 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:31.024 20:48:54 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:31.024 20:48:54 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:31.024 20:48:54 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:31.024 20:48:54 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:31.024 20:48:54 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:31.024 20:48:54 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:31.024 20:48:54 -- nvmf/common.sh@295 -- # net_devs=() 00:17:31.024 20:48:54 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:31.024 20:48:54 -- nvmf/common.sh@296 
-- # e810=() 00:17:31.024 20:48:54 -- nvmf/common.sh@296 -- # local -ga e810 00:17:31.024 20:48:54 -- nvmf/common.sh@297 -- # x722=() 00:17:31.024 20:48:54 -- nvmf/common.sh@297 -- # local -ga x722 00:17:31.024 20:48:54 -- nvmf/common.sh@298 -- # mlx=() 00:17:31.024 20:48:54 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:31.024 20:48:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.024 20:48:54 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:31.024 20:48:54 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:31.024 20:48:54 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:31.024 20:48:54 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:31.024 20:48:54 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:31.024 20:48:54 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:31.024 20:48:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.024 20:48:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:31.024 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:31.024 20:48:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.024 20:48:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.024 20:48:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:31.025 20:48:54 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:31.025 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:31.025 20:48:54 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:31.025 20:48:54 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.025 20:48:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.025 20:48:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:31.025 20:48:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.025 20:48:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:31.025 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:17:31.025 20:48:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.025 20:48:54 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:31.025 20:48:54 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.025 20:48:54 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:31.025 20:48:54 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.025 20:48:54 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:31.025 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:31.025 20:48:54 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.025 20:48:54 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:31.025 20:48:54 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:31.025 20:48:54 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:31.025 20:48:54 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.025 20:48:54 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.025 20:48:54 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.025 20:48:54 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:31.025 20:48:54 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.025 20:48:54 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.025 20:48:54 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:31.025 20:48:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.025 20:48:54 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.025 20:48:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:31.025 20:48:54 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:31.025 20:48:54 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.025 20:48:54 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.025 20:48:54 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.025 20:48:54 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.025 20:48:54 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:31.025 20:48:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.025 20:48:54 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.025 20:48:54 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.025 20:48:54 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:31.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:17:31.025 00:17:31.025 --- 10.0.0.2 ping statistics --- 00:17:31.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.025 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:17:31.025 20:48:54 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:31.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:17:31.025 00:17:31.025 --- 10.0.0.1 ping statistics --- 00:17:31.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.025 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:17:31.025 20:48:54 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.025 20:48:54 -- nvmf/common.sh@411 -- # return 0 00:17:31.025 20:48:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:31.025 20:48:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.025 20:48:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:31.025 20:48:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.025 20:48:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:31.025 20:48:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:31.025 20:48:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:31.025 20:48:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:31.025 20:48:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:31.025 20:48:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.025 20:48:54 -- nvmf/common.sh@470 -- # nvmfpid=2782637 00:17:31.025 20:48:54 -- nvmf/common.sh@471 -- # waitforlisten 2782637 00:17:31.025 20:48:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:31.025 20:48:54 -- common/autotest_common.sh@817 -- # '[' -z 2782637 ']' 00:17:31.025 20:48:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.025 20:48:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:31.025 20:48:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:31.025 20:48:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:31.025 20:48:54 -- common/autotest_common.sh@10 -- # set +x 00:17:31.025 [2024-04-24 20:48:54.816977] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:17:31.025 [2024-04-24 20:48:54.817045] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:31.025 [2024-04-24 20:48:54.909874] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.025 [2024-04-24 20:48:55.012411] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.025 [2024-04-24 20:48:55.012461] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.025 [2024-04-24 20:48:55.012470] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.025 [2024-04-24 20:48:55.012477] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.025 [2024-04-24 20:48:55.012483] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
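Before the target starts, nvmftestinit (traced above) turns the two E810 ports, cvl_0_0 and cvl_0_1, into a point-to-point test network by moving the target-side port into its own network namespace, and nvmfappstart then launches nvmf_tgt inside that namespace without hugepages, which is why the EAL line above shows -m 1024 --no-huge --iova-mode=va. A condensed, illustrative sketch of those steps, with interface names, addresses and flags copied from the trace:

# Sketch of the bring-up traced above (device names and addresses from the log).
ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # firewall rule added by the test scripts
ping -c 1 10.0.0.2                                             # connectivity checks, as traced
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Target launched inside the namespace; --no-huge -s 1024 runs it on 1024 MiB of regular pages.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78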
00:17:31.025 [2024-04-24 20:48:55.012655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:31.025 [2024-04-24 20:48:55.012773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:31.025 [2024-04-24 20:48:55.012976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.025 [2024-04-24 20:48:55.012976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:31.287 20:48:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.287 20:48:55 -- common/autotest_common.sh@850 -- # return 0 00:17:31.287 20:48:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:31.287 20:48:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:31.287 20:48:55 -- common/autotest_common.sh@10 -- # set +x 00:17:31.287 20:48:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.287 20:48:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.287 20:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.287 20:48:55 -- common/autotest_common.sh@10 -- # set +x 00:17:31.287 [2024-04-24 20:48:55.759371] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.287 20:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.287 20:48:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.287 20:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.287 20:48:55 -- common/autotest_common.sh@10 -- # set +x 00:17:31.287 Malloc0 00:17:31.287 20:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.287 20:48:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:31.287 20:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.287 20:48:55 -- common/autotest_common.sh@10 -- # set +x 00:17:31.287 20:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.287 20:48:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.287 20:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.287 20:48:55 -- common/autotest_common.sh@10 -- # set +x 00:17:31.287 20:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.287 20:48:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.287 20:48:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.287 20:48:55 -- common/autotest_common.sh@10 -- # set +x 00:17:31.287 [2024-04-24 20:48:55.813009] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.287 20:48:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.287 20:48:55 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:31.287 20:48:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:31.287 20:48:55 -- nvmf/common.sh@521 -- # config=() 00:17:31.287 20:48:55 -- nvmf/common.sh@521 -- # local subsystem config 00:17:31.287 20:48:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:31.287 20:48:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:31.287 { 00:17:31.287 "params": { 00:17:31.287 "name": "Nvme$subsystem", 00:17:31.287 "trtype": "$TEST_TRANSPORT", 00:17:31.287 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:31.287 "adrfam": "ipv4", 00:17:31.287 
"trsvcid": "$NVMF_PORT", 00:17:31.287 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:31.287 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:31.287 "hdgst": ${hdgst:-false}, 00:17:31.287 "ddgst": ${ddgst:-false} 00:17:31.287 }, 00:17:31.287 "method": "bdev_nvme_attach_controller" 00:17:31.287 } 00:17:31.287 EOF 00:17:31.287 )") 00:17:31.287 20:48:55 -- nvmf/common.sh@543 -- # cat 00:17:31.287 20:48:55 -- nvmf/common.sh@545 -- # jq . 00:17:31.287 20:48:55 -- nvmf/common.sh@546 -- # IFS=, 00:17:31.287 20:48:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:31.287 "params": { 00:17:31.287 "name": "Nvme1", 00:17:31.287 "trtype": "tcp", 00:17:31.287 "traddr": "10.0.0.2", 00:17:31.287 "adrfam": "ipv4", 00:17:31.287 "trsvcid": "4420", 00:17:31.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.287 "hdgst": false, 00:17:31.287 "ddgst": false 00:17:31.287 }, 00:17:31.287 "method": "bdev_nvme_attach_controller" 00:17:31.287 }' 00:17:31.287 [2024-04-24 20:48:55.867646] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:17:31.287 [2024-04-24 20:48:55.867716] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2782853 ] 00:17:31.548 [2024-04-24 20:48:55.954502] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:31.548 [2024-04-24 20:48:56.057002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.548 [2024-04-24 20:48:56.057202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.548 [2024-04-24 20:48:56.057207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.809 I/O targets: 00:17:31.809 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:31.809 00:17:31.809 00:17:31.809 CUnit - A unit testing framework for C - Version 2.1-3 00:17:31.809 http://cunit.sourceforge.net/ 00:17:31.809 00:17:31.809 00:17:31.809 Suite: bdevio tests on: Nvme1n1 00:17:31.809 Test: blockdev write read block ...passed 00:17:32.070 Test: blockdev write zeroes read block ...passed 00:17:32.070 Test: blockdev write zeroes read no split ...passed 00:17:32.070 Test: blockdev write zeroes read split ...passed 00:17:32.070 Test: blockdev write zeroes read split partial ...passed 00:17:32.070 Test: blockdev reset ...[2024-04-24 20:48:56.523091] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:32.070 [2024-04-24 20:48:56.523153] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xad3bc0 (9): Bad file descriptor 00:17:32.070 [2024-04-24 20:48:56.592592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:32.070 passed 00:17:32.070 Test: blockdev write read 8 blocks ...passed 00:17:32.070 Test: blockdev write read size > 128k ...passed 00:17:32.070 Test: blockdev write read invalid size ...passed 00:17:32.070 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:32.070 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:32.070 Test: blockdev write read max offset ...passed 00:17:32.332 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:32.332 Test: blockdev writev readv 8 blocks ...passed 00:17:32.332 Test: blockdev writev readv 30 x 1block ...passed 00:17:32.332 Test: blockdev writev readv block ...passed 00:17:32.332 Test: blockdev writev readv size > 128k ...passed 00:17:32.332 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:32.332 Test: blockdev comparev and writev ...[2024-04-24 20:48:56.856141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.856166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.856176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.856182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.856699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.856707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.856717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.856722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.857211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.857220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.857229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.857234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.857720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.857733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.857743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:32.332 [2024-04-24 20:48:56.857748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:32.332 passed 00:17:32.332 Test: blockdev nvme passthru rw ...passed 00:17:32.332 Test: blockdev nvme passthru vendor specific ...[2024-04-24 20:48:56.942399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.332 [2024-04-24 20:48:56.942410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.942761] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.332 [2024-04-24 20:48:56.942769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.943105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.332 [2024-04-24 20:48:56.943112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:32.332 [2024-04-24 20:48:56.943459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.332 [2024-04-24 20:48:56.943468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:32.332 passed 00:17:32.332 Test: blockdev nvme admin passthru ...passed 00:17:32.592 Test: blockdev copy ...passed 00:17:32.592 00:17:32.592 Run Summary: Type Total Ran Passed Failed Inactive 00:17:32.592 suites 1 1 n/a 0 0 00:17:32.592 tests 23 23 23 0 0 00:17:32.592 asserts 152 152 152 0 n/a 00:17:32.592 00:17:32.592 Elapsed time = 1.264 seconds 00:17:32.854 20:48:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.854 20:48:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:32.854 20:48:57 -- common/autotest_common.sh@10 -- # set +x 00:17:32.854 20:48:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:32.854 20:48:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:32.854 20:48:57 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:32.854 20:48:57 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:32.854 20:48:57 -- nvmf/common.sh@117 -- # sync 00:17:32.854 20:48:57 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.854 20:48:57 -- nvmf/common.sh@120 -- # set +e 00:17:32.854 20:48:57 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.854 20:48:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.854 rmmod nvme_tcp 00:17:32.854 rmmod nvme_fabrics 00:17:32.854 rmmod nvme_keyring 00:17:32.854 20:48:57 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.854 20:48:57 -- nvmf/common.sh@124 -- # set -e 00:17:32.854 20:48:57 -- nvmf/common.sh@125 -- # return 0 00:17:32.854 20:48:57 -- nvmf/common.sh@478 -- # '[' -n 2782637 ']' 00:17:32.854 20:48:57 -- nvmf/common.sh@479 -- # killprocess 2782637 00:17:32.854 20:48:57 -- common/autotest_common.sh@936 -- # '[' -z 2782637 ']' 00:17:32.854 20:48:57 -- common/autotest_common.sh@940 -- # kill -0 2782637 00:17:32.854 20:48:57 -- common/autotest_common.sh@941 -- # uname 00:17:32.854 20:48:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:32.854 20:48:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2782637 00:17:32.854 20:48:57 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:17:32.854 20:48:57 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:17:32.854 20:48:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2782637' 00:17:32.854 killing process with pid 2782637 00:17:32.854 20:48:57 -- common/autotest_common.sh@955 -- # kill 2782637 00:17:32.854 20:48:57 -- common/autotest_common.sh@960 -- # wait 2782637 00:17:33.114 20:48:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:33.114 20:48:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:33.114 20:48:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:33.114 20:48:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:33.114 20:48:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:33.114 20:48:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.114 20:48:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:33.114 20:48:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.657 20:48:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.657 00:17:35.657 real 0m12.550s 00:17:35.657 user 0m15.156s 00:17:35.657 sys 0m6.597s 00:17:35.657 20:48:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.657 20:48:59 -- common/autotest_common.sh@10 -- # set +x 00:17:35.657 ************************************ 00:17:35.657 END TEST nvmf_bdevio_no_huge 00:17:35.657 ************************************ 00:17:35.657 20:48:59 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:35.657 20:48:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:35.657 20:48:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.657 20:48:59 -- common/autotest_common.sh@10 -- # set +x 00:17:35.657 ************************************ 00:17:35.657 START TEST nvmf_tls 00:17:35.657 ************************************ 00:17:35.657 20:48:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:35.657 * Looking for test storage... 
00:17:35.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:35.657 20:49:00 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.657 20:49:00 -- nvmf/common.sh@7 -- # uname -s 00:17:35.657 20:49:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.657 20:49:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.657 20:49:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.657 20:49:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.657 20:49:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.657 20:49:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.657 20:49:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.657 20:49:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.657 20:49:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.657 20:49:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.657 20:49:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:35.657 20:49:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:17:35.657 20:49:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.658 20:49:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.658 20:49:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.658 20:49:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.658 20:49:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.658 20:49:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.658 20:49:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.658 20:49:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.658 20:49:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.658 20:49:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.658 20:49:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.658 20:49:00 -- paths/export.sh@5 -- # export PATH 00:17:35.658 20:49:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.658 20:49:00 -- nvmf/common.sh@47 -- # : 0 00:17:35.658 20:49:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.658 20:49:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.658 20:49:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.658 20:49:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.658 20:49:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.658 20:49:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.658 20:49:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.658 20:49:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.658 20:49:00 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.658 20:49:00 -- target/tls.sh@62 -- # nvmftestinit 00:17:35.658 20:49:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:35.658 20:49:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.658 20:49:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:35.658 20:49:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:35.658 20:49:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:35.658 20:49:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.658 20:49:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.658 20:49:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.658 20:49:00 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:17:35.658 20:49:00 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:35.658 20:49:00 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.658 20:49:00 -- common/autotest_common.sh@10 -- # set +x 00:17:43.796 20:49:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:43.796 20:49:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:43.796 20:49:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:43.796 20:49:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:43.796 20:49:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:43.796 20:49:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:43.796 20:49:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:43.796 20:49:06 -- nvmf/common.sh@295 -- # net_devs=() 00:17:43.796 20:49:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:43.796 20:49:06 -- nvmf/common.sh@296 -- # e810=() 00:17:43.796 
20:49:06 -- nvmf/common.sh@296 -- # local -ga e810 00:17:43.796 20:49:06 -- nvmf/common.sh@297 -- # x722=() 00:17:43.796 20:49:06 -- nvmf/common.sh@297 -- # local -ga x722 00:17:43.796 20:49:06 -- nvmf/common.sh@298 -- # mlx=() 00:17:43.796 20:49:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:43.796 20:49:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.796 20:49:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:43.796 20:49:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:43.796 20:49:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:43.796 20:49:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.796 20:49:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:43.796 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:43.796 20:49:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:43.796 20:49:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:43.796 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:43.796 20:49:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:43.796 20:49:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.796 20:49:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.796 20:49:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:43.796 20:49:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.796 20:49:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:43.796 Found net devices under 
0000:4b:00.0: cvl_0_0 00:17:43.796 20:49:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.796 20:49:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:43.796 20:49:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.796 20:49:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:43.796 20:49:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.796 20:49:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:43.796 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:43.796 20:49:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.796 20:49:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:43.796 20:49:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:43.796 20:49:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:43.796 20:49:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:43.797 20:49:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.797 20:49:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.797 20:49:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.797 20:49:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:43.797 20:49:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.797 20:49:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.797 20:49:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:43.797 20:49:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.797 20:49:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.797 20:49:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:43.797 20:49:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:43.797 20:49:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.797 20:49:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.797 20:49:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.797 20:49:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.797 20:49:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:43.797 20:49:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.797 20:49:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.797 20:49:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.797 20:49:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:43.797 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.797 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:17:43.797 00:17:43.797 --- 10.0.0.2 ping statistics --- 00:17:43.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.797 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:17:43.797 20:49:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.797 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.797 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:17:43.797 00:17:43.797 --- 10.0.0.1 ping statistics --- 00:17:43.797 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.797 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:43.797 20:49:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.797 20:49:07 -- nvmf/common.sh@411 -- # return 0 00:17:43.797 20:49:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:43.797 20:49:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.797 20:49:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:43.797 20:49:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:43.797 20:49:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.797 20:49:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:43.797 20:49:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:43.797 20:49:07 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:43.797 20:49:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:43.797 20:49:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:43.797 20:49:07 -- common/autotest_common.sh@10 -- # set +x 00:17:43.797 20:49:07 -- nvmf/common.sh@470 -- # nvmfpid=2787362 00:17:43.797 20:49:07 -- nvmf/common.sh@471 -- # waitforlisten 2787362 00:17:43.797 20:49:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:43.797 20:49:07 -- common/autotest_common.sh@817 -- # '[' -z 2787362 ']' 00:17:43.797 20:49:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.797 20:49:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:43.797 20:49:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.797 20:49:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:43.797 20:49:07 -- common/autotest_common.sh@10 -- # set +x 00:17:43.797 [2024-04-24 20:49:07.378957] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:17:43.797 [2024-04-24 20:49:07.379023] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.797 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.797 [2024-04-24 20:49:07.450705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.797 [2024-04-24 20:49:07.522820] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.797 [2024-04-24 20:49:07.522855] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.797 [2024-04-24 20:49:07.522863] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.797 [2024-04-24 20:49:07.522869] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.797 [2024-04-24 20:49:07.522875] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
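For the TLS test the target is started with -m 0x2 --wait-for-rpc (command line above), so the ssl socket implementation can be configured over RPC before the SPDK framework finishes initializing; the trace that follows does exactly that and only then calls framework_start_init. A minimal, illustrative sketch of that ordering, using the same rpc.py calls as the trace:

# Illustrative sketch -- configure the ssl sock impl before framework init (target started with --wait-for-rpc).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$RPC sock_set_default_impl -i ssl                       # make ssl the default socket implementation
$RPC sock_impl_set_options -i ssl --tls-version 13      # pin TLS 1.3, as tls.sh does below
$RPC sock_impl_get_options -i ssl | jq -r .tls_version  # verify the setting, as the trace does
$RPC framework_start_init                               # only now complete target initialization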
00:17:43.797 [2024-04-24 20:49:07.522895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.797 20:49:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:43.797 20:49:08 -- common/autotest_common.sh@850 -- # return 0 00:17:43.797 20:49:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:43.797 20:49:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:43.797 20:49:08 -- common/autotest_common.sh@10 -- # set +x 00:17:43.797 20:49:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.797 20:49:08 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:43.797 20:49:08 -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:44.058 true 00:17:44.058 20:49:08 -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.058 20:49:08 -- target/tls.sh@73 -- # jq -r .tls_version 00:17:44.058 20:49:08 -- target/tls.sh@73 -- # version=0 00:17:44.058 20:49:08 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:44.058 20:49:08 -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:44.319 20:49:08 -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.319 20:49:08 -- target/tls.sh@81 -- # jq -r .tls_version 00:17:44.580 20:49:09 -- target/tls.sh@81 -- # version=13 00:17:44.580 20:49:09 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:44.581 20:49:09 -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:44.845 20:49:09 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.845 20:49:09 -- target/tls.sh@89 -- # jq -r .tls_version 00:17:45.170 20:49:09 -- target/tls.sh@89 -- # version=7 00:17:45.170 20:49:09 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:45.170 20:49:09 -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.170 20:49:09 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:45.170 20:49:09 -- target/tls.sh@96 -- # ktls=false 00:17:45.170 20:49:09 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:45.170 20:49:09 -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:45.430 20:49:09 -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.430 20:49:09 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:45.692 20:49:10 -- target/tls.sh@104 -- # ktls=true 00:17:45.692 20:49:10 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:45.692 20:49:10 -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:45.692 20:49:10 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.692 20:49:10 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:45.952 20:49:10 -- target/tls.sh@112 -- # ktls=false 00:17:45.952 20:49:10 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:45.952 20:49:10 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 
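The format_interchange_psk helper invoked here (its expansion follows in the next trace lines) wraps a raw hex string in the NVMe TLS PSK interchange form NVMeTLSkey-1:<hash>:<base64 of the key characters plus a 4-byte CRC32>:. A sketch of what that helper does, written as a standalone function instead of the inline python heredoc the harness uses; the exact layout (CRC32 appended little-endian before base64) is an assumption inferred from the trace below:

```bash
# Sketch only, not the verbatim nvmf/common.sh helper.
format_key() {
  local prefix=$1 key=$2 digest=$3
  python3 - "$prefix" "$key" "$digest" << 'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # 4-byte checksum appended to the key
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
# should match the NVMeTLSkey-1:01:MDAxMTIy...JEiQ: string generated in the trace below
```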
00:17:45.952 20:49:10 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:45.952 20:49:10 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:45.952 20:49:10 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:45.952 20:49:10 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:17:45.952 20:49:10 -- nvmf/common.sh@693 -- # digest=1 00:17:45.952 20:49:10 -- nvmf/common.sh@694 -- # python - 00:17:45.952 20:49:10 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:45.952 20:49:10 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:45.952 20:49:10 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:45.952 20:49:10 -- nvmf/common.sh@691 -- # local prefix key digest 00:17:45.952 20:49:10 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:17:45.952 20:49:10 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:17:45.952 20:49:10 -- nvmf/common.sh@693 -- # digest=1 00:17:45.952 20:49:10 -- nvmf/common.sh@694 -- # python - 00:17:45.952 20:49:10 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:45.952 20:49:10 -- target/tls.sh@121 -- # mktemp 00:17:45.952 20:49:10 -- target/tls.sh@121 -- # key_path=/tmp/tmp.XeyK83IiMD 00:17:46.213 20:49:10 -- target/tls.sh@122 -- # mktemp 00:17:46.213 20:49:10 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6fZ2T9j3U6 00:17:46.213 20:49:10 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:46.213 20:49:10 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:46.213 20:49:10 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.XeyK83IiMD 00:17:46.213 20:49:10 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6fZ2T9j3U6 00:17:46.213 20:49:10 -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:46.213 20:49:10 -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:46.474 20:49:11 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.XeyK83IiMD 00:17:46.474 20:49:11 -- target/tls.sh@49 -- # local key=/tmp/tmp.XeyK83IiMD 00:17:46.474 20:49:11 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:46.735 [2024-04-24 20:49:11.265111] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.735 20:49:11 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:46.995 20:49:11 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:47.256 [2024-04-24 20:49:11.666119] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:47.256 [2024-04-24 20:49:11.666330] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:47.256 20:49:11 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:47.256 malloc0 00:17:47.256 20:49:11 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:47.517 20:49:12 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XeyK83IiMD 00:17:47.778 [2024-04-24 20:49:12.270414] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:47.778 20:49:12 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.XeyK83IiMD 00:17:47.778 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.782 Initializing NVMe Controllers 00:17:57.782 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.782 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:57.782 Initialization complete. Launching workers. 00:17:57.782 ======================================================== 00:17:57.782 Latency(us) 00:17:57.782 Device Information : IOPS MiB/s Average min max 00:17:57.782 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13239.30 51.72 4834.81 1058.51 5507.31 00:17:57.782 ======================================================== 00:17:57.782 Total : 13239.30 51.72 4834.81 1058.51 5507.31 00:17:57.782 00:17:57.782 20:49:22 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeyK83IiMD 00:17:57.782 20:49:22 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.782 20:49:22 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:57.782 20:49:22 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.782 20:49:22 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XeyK83IiMD' 00:17:57.782 20:49:22 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.782 20:49:22 -- target/tls.sh@28 -- # bdevperf_pid=2790264 00:17:57.782 20:49:22 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.782 20:49:22 -- target/tls.sh@31 -- # waitforlisten 2790264 /var/tmp/bdevperf.sock 00:17:57.782 20:49:22 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.782 20:49:22 -- common/autotest_common.sh@817 -- # '[' -z 2790264 ']' 00:17:57.782 20:49:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.782 20:49:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.782 20:49:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.782 20:49:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.782 20:49:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.042 [2024-04-24 20:49:22.472731] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
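Target-side setup for this test is a short rpc.py sequence, traced in full above: select the ssl socket implementation, start the framework, create the TCP transport, then build a subsystem whose listener is opened with -k (TLS) and whose allowed host carries the PSK file. Condensed into one place, with the rpc path and /tmp/tmp.XeyK83IiMD taken from this run:

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY=/tmp/tmp.XeyK83IiMD       # interchange-format PSK, chmod 0600 earlier in the trace

$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"
```

The target notes that the PSK-path form of nvmf_subsystem_add_host is deprecated for removal in v24.09, and the spdk_nvme_perf run above exercises the listener from inside the namespace with -S ssl and --psk-path pointing at the same file.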
00:17:58.043 [2024-04-24 20:49:22.472801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2790264 ] 00:17:58.043 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.043 [2024-04-24 20:49:22.522238] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.043 [2024-04-24 20:49:22.573188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.043 20:49:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:58.043 20:49:22 -- common/autotest_common.sh@850 -- # return 0 00:17:58.043 20:49:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XeyK83IiMD 00:17:58.303 [2024-04-24 20:49:22.824908] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.303 [2024-04-24 20:49:22.824968] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:58.303 TLSTESTn1 00:17:58.303 20:49:22 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:58.563 Running I/O for 10 seconds... 00:18:08.558 00:18:08.558 Latency(us) 00:18:08.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.558 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:08.558 Verification LBA range: start 0x0 length 0x2000 00:18:08.558 TLSTESTn1 : 10.02 4261.11 16.64 0.00 0.00 30002.07 4505.60 251658.24 00:18:08.558 =================================================================================================================== 00:18:08.558 Total : 4261.11 16.64 0.00 0.00 30002.07 4505.60 251658.24 00:18:08.558 0 00:18:08.558 20:49:33 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:08.558 20:49:33 -- target/tls.sh@45 -- # killprocess 2790264 00:18:08.558 20:49:33 -- common/autotest_common.sh@936 -- # '[' -z 2790264 ']' 00:18:08.558 20:49:33 -- common/autotest_common.sh@940 -- # kill -0 2790264 00:18:08.558 20:49:33 -- common/autotest_common.sh@941 -- # uname 00:18:08.558 20:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.558 20:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2790264 00:18:08.558 20:49:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:08.558 20:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:08.558 20:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2790264' 00:18:08.558 killing process with pid 2790264 00:18:08.558 20:49:33 -- common/autotest_common.sh@955 -- # kill 2790264 00:18:08.558 Received shutdown signal, test time was about 10.000000 seconds 00:18:08.558 00:18:08.558 Latency(us) 00:18:08.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:08.558 =================================================================================================================== 00:18:08.558 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:08.558 [2024-04-24 20:49:33.136321] app.c: 937:log_deprecation_hits: *WARNING*: 
nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:08.558 20:49:33 -- common/autotest_common.sh@960 -- # wait 2790264 00:18:08.817 20:49:33 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fZ2T9j3U6 00:18:08.818 20:49:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:08.818 20:49:33 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fZ2T9j3U6 00:18:08.818 20:49:33 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:08.818 20:49:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.818 20:49:33 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:08.818 20:49:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:08.818 20:49:33 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fZ2T9j3U6 00:18:08.818 20:49:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:08.818 20:49:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:08.818 20:49:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:08.818 20:49:33 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6fZ2T9j3U6' 00:18:08.818 20:49:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.818 20:49:33 -- target/tls.sh@28 -- # bdevperf_pid=2792465 00:18:08.818 20:49:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.818 20:49:33 -- target/tls.sh@31 -- # waitforlisten 2792465 /var/tmp/bdevperf.sock 00:18:08.818 20:49:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.818 20:49:33 -- common/autotest_common.sh@817 -- # '[' -z 2792465 ']' 00:18:08.818 20:49:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.818 20:49:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:08.818 20:49:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.818 20:49:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:08.818 20:49:33 -- common/autotest_common.sh@10 -- # set +x 00:18:08.818 [2024-04-24 20:49:33.299676] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
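For the bdevperf variant the initiator side is split into three steps, all visible in the trace above: start bdevperf idle (-z) on a private RPC socket, attach a controller with bdev_nvme_attach_controller --psk, then drive the configured 10-second verify workload through bdevperf.py. A sketch of that sequence using the paths from this job (the NOT case now starting substitutes the mismatched key /tmp/tmp.6fZ2T9j3U6 and is expected to fail):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bdevperf.sock
KEY=/tmp/tmp.XeyK83IiMD

# bdevperf idles (-z) until bdevs are attached over its RPC socket
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
while [ ! -S "$SOCK" ]; do sleep 0.1; done    # the harness uses waitforlisten for this

"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

# run the verify workload and print the latency table seen above
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$SOCK" perform_tests
kill "$bdevperf_pid"
```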
00:18:08.818 [2024-04-24 20:49:33.299736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792465 ] 00:18:08.818 EAL: No free 2048 kB hugepages reported on node 1 00:18:08.818 [2024-04-24 20:49:33.347610] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.818 [2024-04-24 20:49:33.398200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.077 20:49:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.077 20:49:33 -- common/autotest_common.sh@850 -- # return 0 00:18:09.077 20:49:33 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6fZ2T9j3U6 00:18:09.077 [2024-04-24 20:49:33.650385] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.077 [2024-04-24 20:49:33.650442] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.077 [2024-04-24 20:49:33.659002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.077 [2024-04-24 20:49:33.659299] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea0020 (107): Transport endpoint is not connected 00:18:09.077 [2024-04-24 20:49:33.660295] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea0020 (9): Bad file descriptor 00:18:09.077 [2024-04-24 20:49:33.661297] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:09.077 [2024-04-24 20:49:33.661304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.077 [2024-04-24 20:49:33.661310] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:09.077 request: 00:18:09.077 { 00:18:09.077 "name": "TLSTEST", 00:18:09.077 "trtype": "tcp", 00:18:09.077 "traddr": "10.0.0.2", 00:18:09.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.077 "adrfam": "ipv4", 00:18:09.077 "trsvcid": "4420", 00:18:09.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.077 "psk": "/tmp/tmp.6fZ2T9j3U6", 00:18:09.077 "method": "bdev_nvme_attach_controller", 00:18:09.077 "req_id": 1 00:18:09.077 } 00:18:09.077 Got JSON-RPC error response 00:18:09.077 response: 00:18:09.077 { 00:18:09.077 "code": -32602, 00:18:09.077 "message": "Invalid parameters" 00:18:09.077 } 00:18:09.077 20:49:33 -- target/tls.sh@36 -- # killprocess 2792465 00:18:09.077 20:49:33 -- common/autotest_common.sh@936 -- # '[' -z 2792465 ']' 00:18:09.077 20:49:33 -- common/autotest_common.sh@940 -- # kill -0 2792465 00:18:09.077 20:49:33 -- common/autotest_common.sh@941 -- # uname 00:18:09.077 20:49:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.077 20:49:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2792465 00:18:09.337 20:49:33 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:09.337 20:49:33 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:09.337 20:49:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2792465' 00:18:09.337 killing process with pid 2792465 00:18:09.337 20:49:33 -- common/autotest_common.sh@955 -- # kill 2792465 00:18:09.337 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.337 00:18:09.337 Latency(us) 00:18:09.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.337 =================================================================================================================== 00:18:09.337 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.337 [2024-04-24 20:49:33.731051] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.337 20:49:33 -- common/autotest_common.sh@960 -- # wait 2792465 00:18:09.337 20:49:33 -- target/tls.sh@37 -- # return 1 00:18:09.337 20:49:33 -- common/autotest_common.sh@641 -- # es=1 00:18:09.337 20:49:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:09.337 20:49:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:09.337 20:49:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:09.337 20:49:33 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XeyK83IiMD 00:18:09.337 20:49:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:09.337 20:49:33 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XeyK83IiMD 00:18:09.337 20:49:33 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:09.337 20:49:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.337 20:49:33 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:09.337 20:49:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.337 20:49:33 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.XeyK83IiMD 00:18:09.337 20:49:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.337 20:49:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:09.337 20:49:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
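Each negative case here is wrapped in the NOT helper, and the es=1 / (( !es == 0 )) bookkeeping in the trace is how the harness records that the expected failure actually happened. A hypothetical minimal version of that idiom (the real autotest_common.sh implementation also distinguishes signal exits, which the es > 128 check above hints at):

```bash
# Hypothetical minimal NOT: succeed only when the wrapped command fails.
NOT() {
  if "$@"; then
    return 1    # command unexpectedly succeeded
  fi
  return 0      # the failure was expected
}

# e.g. attaching with a PSK the target never registered should be rejected
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6fZ2T9j3U6
```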
00:18:09.337 20:49:33 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XeyK83IiMD' 00:18:09.337 20:49:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.337 20:49:33 -- target/tls.sh@28 -- # bdevperf_pid=2792605 00:18:09.337 20:49:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.337 20:49:33 -- target/tls.sh@31 -- # waitforlisten 2792605 /var/tmp/bdevperf.sock 00:18:09.337 20:49:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.337 20:49:33 -- common/autotest_common.sh@817 -- # '[' -z 2792605 ']' 00:18:09.337 20:49:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.337 20:49:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.337 20:49:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.337 20:49:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.337 20:49:33 -- common/autotest_common.sh@10 -- # set +x 00:18:09.337 [2024-04-24 20:49:33.885073] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:09.337 [2024-04-24 20:49:33.885125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792605 ] 00:18:09.337 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.337 [2024-04-24 20:49:33.935000] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.597 [2024-04-24 20:49:33.985237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.597 20:49:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:09.597 20:49:34 -- common/autotest_common.sh@850 -- # return 0 00:18:09.597 20:49:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.XeyK83IiMD 00:18:09.856 [2024-04-24 20:49:34.248757] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.856 [2024-04-24 20:49:34.248825] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.856 [2024-04-24 20:49:34.260364] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.857 [2024-04-24 20:49:34.260387] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:09.857 [2024-04-24 20:49:34.260410] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:09.857 [2024-04-24 20:49:34.260878] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a89020 (107): Transport endpoint is not connected 00:18:09.857 [2024-04-24 20:49:34.261873] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a89020 (9): Bad file descriptor 00:18:09.857 [2024-04-24 20:49:34.262875] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:09.857 [2024-04-24 20:49:34.262882] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:09.857 [2024-04-24 20:49:34.262888] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:09.857 request: 00:18:09.857 { 00:18:09.857 "name": "TLSTEST", 00:18:09.857 "trtype": "tcp", 00:18:09.857 "traddr": "10.0.0.2", 00:18:09.857 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:09.857 "adrfam": "ipv4", 00:18:09.857 "trsvcid": "4420", 00:18:09.857 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.857 "psk": "/tmp/tmp.XeyK83IiMD", 00:18:09.857 "method": "bdev_nvme_attach_controller", 00:18:09.857 "req_id": 1 00:18:09.857 } 00:18:09.857 Got JSON-RPC error response 00:18:09.857 response: 00:18:09.857 { 00:18:09.857 "code": -32602, 00:18:09.857 "message": "Invalid parameters" 00:18:09.857 } 00:18:09.857 20:49:34 -- target/tls.sh@36 -- # killprocess 2792605 00:18:09.857 20:49:34 -- common/autotest_common.sh@936 -- # '[' -z 2792605 ']' 00:18:09.857 20:49:34 -- common/autotest_common.sh@940 -- # kill -0 2792605 00:18:09.857 20:49:34 -- common/autotest_common.sh@941 -- # uname 00:18:09.857 20:49:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.857 20:49:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2792605 00:18:09.857 20:49:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:09.857 20:49:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:09.857 20:49:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2792605' 00:18:09.857 killing process with pid 2792605 00:18:09.857 20:49:34 -- common/autotest_common.sh@955 -- # kill 2792605 00:18:09.857 Received shutdown signal, test time was about 10.000000 seconds 00:18:09.857 00:18:09.857 Latency(us) 00:18:09.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:09.857 =================================================================================================================== 00:18:09.857 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:09.857 [2024-04-24 20:49:34.349553] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:09.857 20:49:34 -- common/autotest_common.sh@960 -- # wait 2792605 00:18:09.857 20:49:34 -- target/tls.sh@37 -- # return 1 00:18:09.857 20:49:34 -- common/autotest_common.sh@641 -- # es=1 00:18:09.857 20:49:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:09.857 20:49:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:09.857 20:49:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:09.857 20:49:34 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeyK83IiMD 00:18:09.857 20:49:34 -- common/autotest_common.sh@638 -- # local es=0 00:18:09.857 20:49:34 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeyK83IiMD 00:18:09.857 20:49:34 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:09.857 20:49:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.857 20:49:34 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:09.857 20:49:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:09.857 20:49:34 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.XeyK83IiMD 00:18:09.857 20:49:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:09.857 20:49:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:09.857 20:49:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:09.857 20:49:34 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XeyK83IiMD' 00:18:09.857 20:49:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.857 20:49:34 -- target/tls.sh@28 -- # bdevperf_pid=2792625 00:18:09.857 20:49:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:09.857 20:49:34 -- target/tls.sh@31 -- # waitforlisten 2792625 /var/tmp/bdevperf.sock 00:18:09.857 20:49:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:09.857 20:49:34 -- common/autotest_common.sh@817 -- # '[' -z 2792625 ']' 00:18:09.857 20:49:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.857 20:49:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.857 20:49:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.857 20:49:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.857 20:49:34 -- common/autotest_common.sh@10 -- # set +x 00:18:10.117 [2024-04-24 20:49:34.506097] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:18:10.117 [2024-04-24 20:49:34.506148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792625 ] 00:18:10.117 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.117 [2024-04-24 20:49:34.556560] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.117 [2024-04-24 20:49:34.607498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.117 20:49:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:10.117 20:49:34 -- common/autotest_common.sh@850 -- # return 0 00:18:10.117 20:49:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XeyK83IiMD 00:18:10.376 [2024-04-24 20:49:34.875272] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.376 [2024-04-24 20:49:34.875341] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:10.376 [2024-04-24 20:49:34.881908] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.376 [2024-04-24 20:49:34.881930] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:10.376 [2024-04-24 20:49:34.881955] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:10.376 [2024-04-24 20:49:34.882250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105d020 (107): Transport endpoint is not connected 00:18:10.376 [2024-04-24 20:49:34.883244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x105d020 (9): Bad file descriptor 00:18:10.376 [2024-04-24 20:49:34.884246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:10.376 [2024-04-24 20:49:34.884257] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:10.376 [2024-04-24 20:49:34.884262] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:10.376 request: 00:18:10.376 { 00:18:10.376 "name": "TLSTEST", 00:18:10.376 "trtype": "tcp", 00:18:10.376 "traddr": "10.0.0.2", 00:18:10.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.376 "adrfam": "ipv4", 00:18:10.376 "trsvcid": "4420", 00:18:10.376 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:10.376 "psk": "/tmp/tmp.XeyK83IiMD", 00:18:10.376 "method": "bdev_nvme_attach_controller", 00:18:10.376 "req_id": 1 00:18:10.376 } 00:18:10.376 Got JSON-RPC error response 00:18:10.376 response: 00:18:10.376 { 00:18:10.376 "code": -32602, 00:18:10.376 "message": "Invalid parameters" 00:18:10.376 } 00:18:10.376 20:49:34 -- target/tls.sh@36 -- # killprocess 2792625 00:18:10.376 20:49:34 -- common/autotest_common.sh@936 -- # '[' -z 2792625 ']' 00:18:10.376 20:49:34 -- common/autotest_common.sh@940 -- # kill -0 2792625 00:18:10.376 20:49:34 -- common/autotest_common.sh@941 -- # uname 00:18:10.376 20:49:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:10.376 20:49:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2792625 00:18:10.376 20:49:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:10.376 20:49:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:10.376 20:49:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2792625' 00:18:10.376 killing process with pid 2792625 00:18:10.376 20:49:34 -- common/autotest_common.sh@955 -- # kill 2792625 00:18:10.376 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.376 00:18:10.376 Latency(us) 00:18:10.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.376 =================================================================================================================== 00:18:10.376 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.376 [2024-04-24 20:49:34.973392] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:10.376 20:49:34 -- common/autotest_common.sh@960 -- # wait 2792625 00:18:10.637 20:49:35 -- target/tls.sh@37 -- # return 1 00:18:10.637 20:49:35 -- common/autotest_common.sh@641 -- # es=1 00:18:10.637 20:49:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:10.637 20:49:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:10.637 20:49:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:10.637 20:49:35 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.637 20:49:35 -- common/autotest_common.sh@638 -- # local es=0 00:18:10.637 20:49:35 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.637 20:49:35 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:10.637 20:49:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:10.637 20:49:35 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:10.637 20:49:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:10.637 20:49:35 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:10.637 20:49:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:10.637 20:49:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:10.637 20:49:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:10.637 20:49:35 -- target/tls.sh@23 -- # psk= 
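The host2 and cnode2 rejections above both come from the target's PSK lookup: posix.c builds the identity string 'NVMe0R01 <hostnqn> <subnqn>' and finds nothing, because only the (host1, cnode1) pair was ever registered. Making those combinations work would need their own registrations, for example (hypothetical calls, not part of this run; the serial number is made up):

```bash
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# allow host2 to use the existing subsystem with the same key
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.XeyK83IiMD

# or stand up cnode2 with its own TLS listener and host entry
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -s SPDK00000000000002 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XeyK83IiMD
```

The no-PSK attempt being set up here fails for a more basic reason: presumably, without --psk the initiator never negotiates TLS at all against a listener that was created with -k.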
00:18:10.637 20:49:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:10.637 20:49:35 -- target/tls.sh@28 -- # bdevperf_pid=2792889 00:18:10.637 20:49:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:10.637 20:49:35 -- target/tls.sh@31 -- # waitforlisten 2792889 /var/tmp/bdevperf.sock 00:18:10.637 20:49:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:10.637 20:49:35 -- common/autotest_common.sh@817 -- # '[' -z 2792889 ']' 00:18:10.637 20:49:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:10.637 20:49:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:10.637 20:49:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:10.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:10.637 20:49:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:10.637 20:49:35 -- common/autotest_common.sh@10 -- # set +x 00:18:10.637 [2024-04-24 20:49:35.128702] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:10.637 [2024-04-24 20:49:35.128758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792889 ] 00:18:10.637 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.637 [2024-04-24 20:49:35.178529] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.637 [2024-04-24 20:49:35.229079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.898 20:49:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:10.898 20:49:35 -- common/autotest_common.sh@850 -- # return 0 00:18:10.898 20:49:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:10.898 [2024-04-24 20:49:35.503437] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:10.898 [2024-04-24 20:49:35.505376] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d7b00 (9): Bad file descriptor 00:18:10.898 [2024-04-24 20:49:35.506375] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:10.898 [2024-04-24 20:49:35.506383] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:10.898 [2024-04-24 20:49:35.506388] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:10.898 request: 00:18:10.898 { 00:18:10.898 "name": "TLSTEST", 00:18:10.898 "trtype": "tcp", 00:18:10.898 "traddr": "10.0.0.2", 00:18:10.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.898 "adrfam": "ipv4", 00:18:10.898 "trsvcid": "4420", 00:18:10.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.898 "method": "bdev_nvme_attach_controller", 00:18:10.898 "req_id": 1 00:18:10.898 } 00:18:10.898 Got JSON-RPC error response 00:18:10.898 response: 00:18:10.898 { 00:18:10.898 "code": -32602, 00:18:10.898 "message": "Invalid parameters" 00:18:10.898 } 00:18:10.898 20:49:35 -- target/tls.sh@36 -- # killprocess 2792889 00:18:10.898 20:49:35 -- common/autotest_common.sh@936 -- # '[' -z 2792889 ']' 00:18:10.898 20:49:35 -- common/autotest_common.sh@940 -- # kill -0 2792889 00:18:10.898 20:49:35 -- common/autotest_common.sh@941 -- # uname 00:18:11.159 20:49:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.159 20:49:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2792889 00:18:11.159 20:49:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:11.159 20:49:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:11.159 20:49:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2792889' 00:18:11.159 killing process with pid 2792889 00:18:11.159 20:49:35 -- common/autotest_common.sh@955 -- # kill 2792889 00:18:11.159 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.159 00:18:11.159 Latency(us) 00:18:11.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.159 =================================================================================================================== 00:18:11.159 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.159 20:49:35 -- common/autotest_common.sh@960 -- # wait 2792889 00:18:11.159 20:49:35 -- target/tls.sh@37 -- # return 1 00:18:11.159 20:49:35 -- common/autotest_common.sh@641 -- # es=1 00:18:11.159 20:49:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:11.159 20:49:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:11.159 20:49:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:11.159 20:49:35 -- target/tls.sh@158 -- # killprocess 2787362 00:18:11.159 20:49:35 -- common/autotest_common.sh@936 -- # '[' -z 2787362 ']' 00:18:11.159 20:49:35 -- common/autotest_common.sh@940 -- # kill -0 2787362 00:18:11.159 20:49:35 -- common/autotest_common.sh@941 -- # uname 00:18:11.159 20:49:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:11.159 20:49:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2787362 00:18:11.159 20:49:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:11.159 20:49:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:11.159 20:49:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2787362' 00:18:11.159 killing process with pid 2787362 00:18:11.159 20:49:35 -- common/autotest_common.sh@955 -- # kill 2787362 00:18:11.159 [2024-04-24 20:49:35.749516] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:11.159 20:49:35 -- common/autotest_common.sh@960 -- # wait 2787362 00:18:11.419 20:49:35 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.419 20:49:35 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:18:11.419 20:49:35 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:11.419 20:49:35 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:11.419 20:49:35 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:11.419 20:49:35 -- nvmf/common.sh@693 -- # digest=2 00:18:11.419 20:49:35 -- nvmf/common.sh@694 -- # python - 00:18:11.419 20:49:35 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.419 20:49:35 -- target/tls.sh@160 -- # mktemp 00:18:11.419 20:49:35 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.a5cbY8fw61 00:18:11.419 20:49:35 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:11.419 20:49:35 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.a5cbY8fw61 00:18:11.419 20:49:35 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:11.419 20:49:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:11.419 20:49:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:11.419 20:49:35 -- common/autotest_common.sh@10 -- # set +x 00:18:11.419 20:49:35 -- nvmf/common.sh@470 -- # nvmfpid=2792985 00:18:11.419 20:49:35 -- nvmf/common.sh@471 -- # waitforlisten 2792985 00:18:11.419 20:49:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.419 20:49:35 -- common/autotest_common.sh@817 -- # '[' -z 2792985 ']' 00:18:11.419 20:49:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.419 20:49:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:11.419 20:49:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.419 20:49:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:11.419 20:49:35 -- common/autotest_common.sh@10 -- # set +x 00:18:11.419 [2024-04-24 20:49:36.005824] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:11.419 [2024-04-24 20:49:36.005876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.419 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.680 [2024-04-24 20:49:36.073630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.680 [2024-04-24 20:49:36.135029] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.680 [2024-04-24 20:49:36.135066] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.680 [2024-04-24 20:49:36.135074] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.680 [2024-04-24 20:49:36.135081] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.680 [2024-04-24 20:49:36.135086] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:11.680 [2024-04-24 20:49:36.135113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.680 20:49:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:11.680 20:49:36 -- common/autotest_common.sh@850 -- # return 0 00:18:11.680 20:49:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:11.680 20:49:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:11.680 20:49:36 -- common/autotest_common.sh@10 -- # set +x 00:18:11.680 20:49:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.680 20:49:36 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.a5cbY8fw61 00:18:11.680 20:49:36 -- target/tls.sh@49 -- # local key=/tmp/tmp.a5cbY8fw61 00:18:11.680 20:49:36 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:11.983 [2024-04-24 20:49:36.436274] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.983 20:49:36 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:12.277 20:49:36 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:12.277 [2024-04-24 20:49:36.837293] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:12.277 [2024-04-24 20:49:36.837509] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.277 20:49:36 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:12.538 malloc0 00:18:12.538 20:49:37 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.798 20:49:37 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61 00:18:12.799 [2024-04-24 20:49:37.425632] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:13.059 20:49:37 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5cbY8fw61 00:18:13.059 20:49:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.059 20:49:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:13.059 20:49:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.059 20:49:37 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a5cbY8fw61' 00:18:13.059 20:49:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.059 20:49:37 -- target/tls.sh@28 -- # bdevperf_pid=2793340 00:18:13.059 20:49:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.059 20:49:37 -- target/tls.sh@31 -- # waitforlisten 2793340 /var/tmp/bdevperf.sock 00:18:13.059 20:49:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.059 20:49:37 -- common/autotest_common.sh@817 -- # '[' -z 2793340 ']' 00:18:13.059 20:49:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.059 20:49:37 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.059 20:49:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.059 20:49:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.059 20:49:37 -- common/autotest_common.sh@10 -- # set +x 00:18:13.059 [2024-04-24 20:49:37.489156] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:13.060 [2024-04-24 20:49:37.489204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793340 ] 00:18:13.060 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.060 [2024-04-24 20:49:37.537065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.060 [2024-04-24 20:49:37.587926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.060 20:49:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:13.060 20:49:37 -- common/autotest_common.sh@850 -- # return 0 00:18:13.060 20:49:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61 00:18:13.320 [2024-04-24 20:49:37.807367] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.320 [2024-04-24 20:49:37.807424] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:13.320 TLSTESTn1 00:18:13.320 20:49:37 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:13.580 Running I/O for 10 seconds... 
00:18:23.605 00:18:23.605 Latency(us) 00:18:23.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.605 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:23.605 Verification LBA range: start 0x0 length 0x2000 00:18:23.605 TLSTESTn1 : 10.03 3574.10 13.96 0.00 0.00 35749.35 6335.15 225443.84 00:18:23.605 =================================================================================================================== 00:18:23.605 Total : 3574.10 13.96 0.00 0.00 35749.35 6335.15 225443.84 00:18:23.605 0 00:18:23.605 20:49:48 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:23.605 20:49:48 -- target/tls.sh@45 -- # killprocess 2793340 00:18:23.605 20:49:48 -- common/autotest_common.sh@936 -- # '[' -z 2793340 ']' 00:18:23.605 20:49:48 -- common/autotest_common.sh@940 -- # kill -0 2793340 00:18:23.605 20:49:48 -- common/autotest_common.sh@941 -- # uname 00:18:23.605 20:49:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:23.605 20:49:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2793340 00:18:23.605 20:49:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:23.605 20:49:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:23.605 20:49:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2793340' 00:18:23.605 killing process with pid 2793340 00:18:23.605 20:49:48 -- common/autotest_common.sh@955 -- # kill 2793340 00:18:23.605 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.605 00:18:23.605 Latency(us) 00:18:23.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.605 =================================================================================================================== 00:18:23.605 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.605 [2024-04-24 20:49:48.145047] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:23.605 20:49:48 -- common/autotest_common.sh@960 -- # wait 2793340 00:18:23.865 20:49:48 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.a5cbY8fw61 00:18:23.865 20:49:48 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5cbY8fw61 00:18:23.865 20:49:48 -- common/autotest_common.sh@638 -- # local es=0 00:18:23.865 20:49:48 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5cbY8fw61 00:18:23.865 20:49:48 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:18:23.865 20:49:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:23.865 20:49:48 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:18:23.865 20:49:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:23.865 20:49:48 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.a5cbY8fw61 00:18:23.865 20:49:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:23.865 20:49:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:23.865 20:49:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:23.865 20:49:48 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.a5cbY8fw61' 00:18:23.865 20:49:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:23.865 20:49:48 -- target/tls.sh@28 -- # 
bdevperf_pid=2795358 00:18:23.865 20:49:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.865 20:49:48 -- target/tls.sh@31 -- # waitforlisten 2795358 /var/tmp/bdevperf.sock 00:18:23.865 20:49:48 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:23.865 20:49:48 -- common/autotest_common.sh@817 -- # '[' -z 2795358 ']' 00:18:23.865 20:49:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:23.865 20:49:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:23.865 20:49:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:23.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:23.865 20:49:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:23.865 20:49:48 -- common/autotest_common.sh@10 -- # set +x 00:18:23.865 [2024-04-24 20:49:48.311285] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:23.865 [2024-04-24 20:49:48.311337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2795358 ] 00:18:23.865 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.865 [2024-04-24 20:49:48.361236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.865 [2024-04-24 20:49:48.410113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.865 20:49:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:23.865 20:49:48 -- common/autotest_common.sh@850 -- # return 0 00:18:23.865 20:49:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61 00:18:24.125 [2024-04-24 20:49:48.678556] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.125 [2024-04-24 20:49:48.678603] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:24.125 [2024-04-24 20:49:48.678608] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.a5cbY8fw61 00:18:24.125 request: 00:18:24.125 { 00:18:24.125 "name": "TLSTEST", 00:18:24.125 "trtype": "tcp", 00:18:24.125 "traddr": "10.0.0.2", 00:18:24.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:24.125 "adrfam": "ipv4", 00:18:24.125 "trsvcid": "4420", 00:18:24.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:24.125 "psk": "/tmp/tmp.a5cbY8fw61", 00:18:24.125 "method": "bdev_nvme_attach_controller", 00:18:24.125 "req_id": 1 00:18:24.125 } 00:18:24.125 Got JSON-RPC error response 00:18:24.125 response: 00:18:24.125 { 00:18:24.125 "code": -1, 00:18:24.125 "message": "Operation not permitted" 00:18:24.125 } 00:18:24.125 20:49:48 -- target/tls.sh@36 -- # killprocess 2795358 00:18:24.125 20:49:48 -- common/autotest_common.sh@936 -- # '[' -z 2795358 ']' 00:18:24.125 20:49:48 -- common/autotest_common.sh@940 -- # kill -0 2795358 00:18:24.125 20:49:48 -- common/autotest_common.sh@941 -- # uname 00:18:24.125 20:49:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.125 
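The failed attach above is the point of the 0666 test: the initiator refuses to load a TLS PSK whose file mode is wider than owner read/write. A condensed sketch of that check, reusing the key path, NQNs and rpc.py arguments visible in this trace (the jenkins workspace prefix is shortened to the spdk repo root; everything else is taken from the log):

chmod 0666 /tmp/tmp.a5cbY8fw61                      # world-accessible key, as set at target/tls.sh@170
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
  -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
  --psk /tmp/tmp.a5cbY8fw61                         # rejected: "Incorrect permissions for PSK file" -> JSON-RPC -1 Operation not permitted
chmod 0600 /tmp/tmp.a5cbY8fw61                      # owner-only mode; the trace restores this further down (target/tls.sh@181) before the passing runs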
20:49:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2795358 00:18:24.125 20:49:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:24.385 20:49:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:24.385 20:49:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2795358' 00:18:24.385 killing process with pid 2795358 00:18:24.385 20:49:48 -- common/autotest_common.sh@955 -- # kill 2795358 00:18:24.385 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.385 00:18:24.385 Latency(us) 00:18:24.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.385 =================================================================================================================== 00:18:24.385 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:24.385 20:49:48 -- common/autotest_common.sh@960 -- # wait 2795358 00:18:24.385 20:49:48 -- target/tls.sh@37 -- # return 1 00:18:24.385 20:49:48 -- common/autotest_common.sh@641 -- # es=1 00:18:24.385 20:49:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:24.385 20:49:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:24.385 20:49:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:24.385 20:49:48 -- target/tls.sh@174 -- # killprocess 2792985 00:18:24.385 20:49:48 -- common/autotest_common.sh@936 -- # '[' -z 2792985 ']' 00:18:24.385 20:49:48 -- common/autotest_common.sh@940 -- # kill -0 2792985 00:18:24.385 20:49:48 -- common/autotest_common.sh@941 -- # uname 00:18:24.385 20:49:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.385 20:49:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2792985 00:18:24.385 20:49:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:24.385 20:49:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:24.385 20:49:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2792985' 00:18:24.385 killing process with pid 2792985 00:18:24.385 20:49:48 -- common/autotest_common.sh@955 -- # kill 2792985 00:18:24.385 [2024-04-24 20:49:48.928310] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:24.385 20:49:48 -- common/autotest_common.sh@960 -- # wait 2792985 00:18:24.644 20:49:49 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:24.644 20:49:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:24.644 20:49:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:24.644 20:49:49 -- common/autotest_common.sh@10 -- # set +x 00:18:24.644 20:49:49 -- nvmf/common.sh@470 -- # nvmfpid=2795692 00:18:24.644 20:49:49 -- nvmf/common.sh@471 -- # waitforlisten 2795692 00:18:24.644 20:49:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:24.644 20:49:49 -- common/autotest_common.sh@817 -- # '[' -z 2795692 ']' 00:18:24.644 20:49:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.644 20:49:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:24.644 20:49:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:24.644 20:49:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:24.644 20:49:49 -- common/autotest_common.sh@10 -- # set +x 00:18:24.644 [2024-04-24 20:49:49.123669] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:24.644 [2024-04-24 20:49:49.123730] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.644 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.644 [2024-04-24 20:49:49.186919] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.644 [2024-04-24 20:49:49.249867] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.644 [2024-04-24 20:49:49.249900] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.644 [2024-04-24 20:49:49.249907] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.644 [2024-04-24 20:49:49.249913] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.644 [2024-04-24 20:49:49.249919] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.644 [2024-04-24 20:49:49.249943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.904 20:49:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:24.904 20:49:49 -- common/autotest_common.sh@850 -- # return 0 00:18:24.904 20:49:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:24.904 20:49:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:24.904 20:49:49 -- common/autotest_common.sh@10 -- # set +x 00:18:24.904 20:49:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.904 20:49:49 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.a5cbY8fw61 00:18:24.904 20:49:49 -- common/autotest_common.sh@638 -- # local es=0 00:18:24.904 20:49:49 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.a5cbY8fw61 00:18:24.904 20:49:49 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:18:24.904 20:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.904 20:49:49 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:18:24.904 20:49:49 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:24.904 20:49:49 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.a5cbY8fw61 00:18:24.904 20:49:49 -- target/tls.sh@49 -- # local key=/tmp/tmp.a5cbY8fw61 00:18:24.904 20:49:49 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.163 [2024-04-24 20:49:49.559272] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.163 20:49:49 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:25.163 20:49:49 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:25.423 [2024-04-24 20:49:49.960279] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.423 [2024-04-24 20:49:49.960488] tcp.c: 
964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.423 20:49:49 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.682 malloc0 00:18:25.682 20:49:50 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.942 20:49:50 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61 00:18:25.942 [2024-04-24 20:49:50.552473] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:25.942 [2024-04-24 20:49:50.552496] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:25.942 [2024-04-24 20:49:50.552519] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:18:25.942 request: 00:18:25.942 { 00:18:25.942 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.942 "host": "nqn.2016-06.io.spdk:host1", 00:18:25.942 "psk": "/tmp/tmp.a5cbY8fw61", 00:18:25.942 "method": "nvmf_subsystem_add_host", 00:18:25.942 "req_id": 1 00:18:25.942 } 00:18:25.942 Got JSON-RPC error response 00:18:25.942 response: 00:18:25.942 { 00:18:25.942 "code": -32603, 00:18:25.942 "message": "Internal error" 00:18:25.942 } 00:18:25.942 20:49:50 -- common/autotest_common.sh@641 -- # es=1 00:18:25.942 20:49:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:25.942 20:49:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:25.942 20:49:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:25.942 20:49:50 -- target/tls.sh@180 -- # killprocess 2795692 00:18:25.942 20:49:50 -- common/autotest_common.sh@936 -- # '[' -z 2795692 ']' 00:18:25.942 20:49:50 -- common/autotest_common.sh@940 -- # kill -0 2795692 00:18:25.942 20:49:50 -- common/autotest_common.sh@941 -- # uname 00:18:25.942 20:49:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:25.942 20:49:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2795692 00:18:26.202 20:49:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:26.202 20:49:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:26.202 20:49:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2795692' 00:18:26.202 killing process with pid 2795692 00:18:26.202 20:49:50 -- common/autotest_common.sh@955 -- # kill 2795692 00:18:26.202 20:49:50 -- common/autotest_common.sh@960 -- # wait 2795692 00:18:26.202 20:49:50 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.a5cbY8fw61 00:18:26.202 20:49:50 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:26.202 20:49:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:26.202 20:49:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:26.202 20:49:50 -- common/autotest_common.sh@10 -- # set +x 00:18:26.202 20:49:50 -- nvmf/common.sh@470 -- # nvmfpid=2796050 00:18:26.202 20:49:50 -- nvmf/common.sh@471 -- # waitforlisten 2796050 00:18:26.202 20:49:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:26.202 20:49:50 -- common/autotest_common.sh@817 -- # '[' -z 2796050 ']' 00:18:26.202 20:49:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.202 20:49:50 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:18:26.202 20:49:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.202 20:49:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:26.202 20:49:50 -- common/autotest_common.sh@10 -- # set +x 00:18:26.202 [2024-04-24 20:49:50.837619] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:26.202 [2024-04-24 20:49:50.837691] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.462 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.462 [2024-04-24 20:49:50.901639] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.462 [2024-04-24 20:49:50.966162] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.462 [2024-04-24 20:49:50.966196] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.462 [2024-04-24 20:49:50.966203] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.462 [2024-04-24 20:49:50.966210] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.462 [2024-04-24 20:49:50.966215] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.462 [2024-04-24 20:49:50.966233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.399 20:49:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.400 20:49:51 -- common/autotest_common.sh@850 -- # return 0 00:18:27.400 20:49:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:27.400 20:49:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:27.400 20:49:51 -- common/autotest_common.sh@10 -- # set +x 00:18:27.400 20:49:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.400 20:49:51 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.a5cbY8fw61 00:18:27.400 20:49:51 -- target/tls.sh@49 -- # local key=/tmp/tmp.a5cbY8fw61 00:18:27.400 20:49:51 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:27.400 [2024-04-24 20:49:51.897305] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.400 20:49:51 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:27.659 20:49:52 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:27.659 [2024-04-24 20:49:52.298323] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.659 [2024-04-24 20:49:52.298526] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.919 20:49:52 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:27.919 malloc0 00:18:27.919 20:49:52 -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.179 20:49:52 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61 00:18:28.438 [2024-04-24 20:49:52.870524] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:28.438 20:49:52 -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:28.438 20:49:52 -- target/tls.sh@188 -- # bdevperf_pid=2796423 00:18:28.438 20:49:52 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.438 20:49:52 -- target/tls.sh@191 -- # waitforlisten 2796423 /var/tmp/bdevperf.sock 00:18:28.438 20:49:52 -- common/autotest_common.sh@817 -- # '[' -z 2796423 ']' 00:18:28.439 20:49:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.439 20:49:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:28.439 20:49:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.439 20:49:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:28.439 20:49:52 -- common/autotest_common.sh@10 -- # set +x 00:18:28.439 [2024-04-24 20:49:52.913722] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:28.439 [2024-04-24 20:49:52.913774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796423 ] 00:18:28.439 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.439 [2024-04-24 20:49:52.961861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.439 [2024-04-24 20:49:53.012565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.698 20:49:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:28.698 20:49:53 -- common/autotest_common.sh@850 -- # return 0 00:18:28.698 20:49:53 -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61 00:18:28.698 [2024-04-24 20:49:53.292420] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.698 [2024-04-24 20:49:53.292481] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:28.958 TLSTESTn1 00:18:28.958 20:49:53 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:29.218 20:49:53 -- target/tls.sh@196 -- # tgtconf='{ 00:18:29.218 "subsystems": [ 00:18:29.218 { 00:18:29.218 "subsystem": "keyring", 00:18:29.218 "config": [] 00:18:29.218 }, 00:18:29.218 { 00:18:29.218 "subsystem": "iobuf", 00:18:29.218 "config": [ 00:18:29.218 { 00:18:29.218 "method": "iobuf_set_options", 00:18:29.218 "params": { 00:18:29.218 
"small_pool_count": 8192, 00:18:29.218 "large_pool_count": 1024, 00:18:29.218 "small_bufsize": 8192, 00:18:29.218 "large_bufsize": 135168 00:18:29.218 } 00:18:29.218 } 00:18:29.218 ] 00:18:29.218 }, 00:18:29.218 { 00:18:29.218 "subsystem": "sock", 00:18:29.218 "config": [ 00:18:29.218 { 00:18:29.218 "method": "sock_impl_set_options", 00:18:29.218 "params": { 00:18:29.218 "impl_name": "posix", 00:18:29.218 "recv_buf_size": 2097152, 00:18:29.218 "send_buf_size": 2097152, 00:18:29.218 "enable_recv_pipe": true, 00:18:29.218 "enable_quickack": false, 00:18:29.218 "enable_placement_id": 0, 00:18:29.218 "enable_zerocopy_send_server": true, 00:18:29.218 "enable_zerocopy_send_client": false, 00:18:29.218 "zerocopy_threshold": 0, 00:18:29.218 "tls_version": 0, 00:18:29.218 "enable_ktls": false 00:18:29.218 } 00:18:29.218 }, 00:18:29.218 { 00:18:29.218 "method": "sock_impl_set_options", 00:18:29.218 "params": { 00:18:29.218 "impl_name": "ssl", 00:18:29.218 "recv_buf_size": 4096, 00:18:29.218 "send_buf_size": 4096, 00:18:29.218 "enable_recv_pipe": true, 00:18:29.218 "enable_quickack": false, 00:18:29.218 "enable_placement_id": 0, 00:18:29.218 "enable_zerocopy_send_server": true, 00:18:29.218 "enable_zerocopy_send_client": false, 00:18:29.218 "zerocopy_threshold": 0, 00:18:29.218 "tls_version": 0, 00:18:29.218 "enable_ktls": false 00:18:29.218 } 00:18:29.218 } 00:18:29.218 ] 00:18:29.218 }, 00:18:29.218 { 00:18:29.218 "subsystem": "vmd", 00:18:29.218 "config": [] 00:18:29.218 }, 00:18:29.218 { 00:18:29.218 "subsystem": "accel", 00:18:29.218 "config": [ 00:18:29.218 { 00:18:29.218 "method": "accel_set_options", 00:18:29.218 "params": { 00:18:29.218 "small_cache_size": 128, 00:18:29.218 "large_cache_size": 16, 00:18:29.218 "task_count": 2048, 00:18:29.218 "sequence_count": 2048, 00:18:29.218 "buf_count": 2048 00:18:29.218 } 00:18:29.218 } 00:18:29.218 ] 00:18:29.218 }, 00:18:29.218 { 00:18:29.218 "subsystem": "bdev", 00:18:29.218 "config": [ 00:18:29.218 { 00:18:29.219 "method": "bdev_set_options", 00:18:29.219 "params": { 00:18:29.219 "bdev_io_pool_size": 65535, 00:18:29.219 "bdev_io_cache_size": 256, 00:18:29.219 "bdev_auto_examine": true, 00:18:29.219 "iobuf_small_cache_size": 128, 00:18:29.219 "iobuf_large_cache_size": 16 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "bdev_raid_set_options", 00:18:29.219 "params": { 00:18:29.219 "process_window_size_kb": 1024 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "bdev_iscsi_set_options", 00:18:29.219 "params": { 00:18:29.219 "timeout_sec": 30 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "bdev_nvme_set_options", 00:18:29.219 "params": { 00:18:29.219 "action_on_timeout": "none", 00:18:29.219 "timeout_us": 0, 00:18:29.219 "timeout_admin_us": 0, 00:18:29.219 "keep_alive_timeout_ms": 10000, 00:18:29.219 "arbitration_burst": 0, 00:18:29.219 "low_priority_weight": 0, 00:18:29.219 "medium_priority_weight": 0, 00:18:29.219 "high_priority_weight": 0, 00:18:29.219 "nvme_adminq_poll_period_us": 10000, 00:18:29.219 "nvme_ioq_poll_period_us": 0, 00:18:29.219 "io_queue_requests": 0, 00:18:29.219 "delay_cmd_submit": true, 00:18:29.219 "transport_retry_count": 4, 00:18:29.219 "bdev_retry_count": 3, 00:18:29.219 "transport_ack_timeout": 0, 00:18:29.219 "ctrlr_loss_timeout_sec": 0, 00:18:29.219 "reconnect_delay_sec": 0, 00:18:29.219 "fast_io_fail_timeout_sec": 0, 00:18:29.219 "disable_auto_failback": false, 00:18:29.219 "generate_uuids": false, 00:18:29.219 "transport_tos": 0, 00:18:29.219 "nvme_error_stat": 
false, 00:18:29.219 "rdma_srq_size": 0, 00:18:29.219 "io_path_stat": false, 00:18:29.219 "allow_accel_sequence": false, 00:18:29.219 "rdma_max_cq_size": 0, 00:18:29.219 "rdma_cm_event_timeout_ms": 0, 00:18:29.219 "dhchap_digests": [ 00:18:29.219 "sha256", 00:18:29.219 "sha384", 00:18:29.219 "sha512" 00:18:29.219 ], 00:18:29.219 "dhchap_dhgroups": [ 00:18:29.219 "null", 00:18:29.219 "ffdhe2048", 00:18:29.219 "ffdhe3072", 00:18:29.219 "ffdhe4096", 00:18:29.219 "ffdhe6144", 00:18:29.219 "ffdhe8192" 00:18:29.219 ] 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "bdev_nvme_set_hotplug", 00:18:29.219 "params": { 00:18:29.219 "period_us": 100000, 00:18:29.219 "enable": false 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "bdev_malloc_create", 00:18:29.219 "params": { 00:18:29.219 "name": "malloc0", 00:18:29.219 "num_blocks": 8192, 00:18:29.219 "block_size": 4096, 00:18:29.219 "physical_block_size": 4096, 00:18:29.219 "uuid": "93fc9ecb-3f67-44ed-af22-fe400a2ea90a", 00:18:29.219 "optimal_io_boundary": 0 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "bdev_wait_for_examine" 00:18:29.219 } 00:18:29.219 ] 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "subsystem": "nbd", 00:18:29.219 "config": [] 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "subsystem": "scheduler", 00:18:29.219 "config": [ 00:18:29.219 { 00:18:29.219 "method": "framework_set_scheduler", 00:18:29.219 "params": { 00:18:29.219 "name": "static" 00:18:29.219 } 00:18:29.219 } 00:18:29.219 ] 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "subsystem": "nvmf", 00:18:29.219 "config": [ 00:18:29.219 { 00:18:29.219 "method": "nvmf_set_config", 00:18:29.219 "params": { 00:18:29.219 "discovery_filter": "match_any", 00:18:29.219 "admin_cmd_passthru": { 00:18:29.219 "identify_ctrlr": false 00:18:29.219 } 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "nvmf_set_max_subsystems", 00:18:29.219 "params": { 00:18:29.219 "max_subsystems": 1024 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "nvmf_set_crdt", 00:18:29.219 "params": { 00:18:29.219 "crdt1": 0, 00:18:29.219 "crdt2": 0, 00:18:29.219 "crdt3": 0 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "nvmf_create_transport", 00:18:29.219 "params": { 00:18:29.219 "trtype": "TCP", 00:18:29.219 "max_queue_depth": 128, 00:18:29.219 "max_io_qpairs_per_ctrlr": 127, 00:18:29.219 "in_capsule_data_size": 4096, 00:18:29.219 "max_io_size": 131072, 00:18:29.219 "io_unit_size": 131072, 00:18:29.219 "max_aq_depth": 128, 00:18:29.219 "num_shared_buffers": 511, 00:18:29.219 "buf_cache_size": 4294967295, 00:18:29.219 "dif_insert_or_strip": false, 00:18:29.219 "zcopy": false, 00:18:29.219 "c2h_success": false, 00:18:29.219 "sock_priority": 0, 00:18:29.219 "abort_timeout_sec": 1, 00:18:29.219 "ack_timeout": 0, 00:18:29.219 "data_wr_pool_size": 0 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "nvmf_create_subsystem", 00:18:29.219 "params": { 00:18:29.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.219 "allow_any_host": false, 00:18:29.219 "serial_number": "SPDK00000000000001", 00:18:29.219 "model_number": "SPDK bdev Controller", 00:18:29.219 "max_namespaces": 10, 00:18:29.219 "min_cntlid": 1, 00:18:29.219 "max_cntlid": 65519, 00:18:29.219 "ana_reporting": false 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "nvmf_subsystem_add_host", 00:18:29.219 "params": { 00:18:29.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.219 "host": "nqn.2016-06.io.spdk:host1", 
00:18:29.219 "psk": "/tmp/tmp.a5cbY8fw61" 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "nvmf_subsystem_add_ns", 00:18:29.219 "params": { 00:18:29.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.219 "namespace": { 00:18:29.219 "nsid": 1, 00:18:29.219 "bdev_name": "malloc0", 00:18:29.219 "nguid": "93FC9ECB3F6744EDAF22FE400A2EA90A", 00:18:29.219 "uuid": "93fc9ecb-3f67-44ed-af22-fe400a2ea90a", 00:18:29.219 "no_auto_visible": false 00:18:29.219 } 00:18:29.219 } 00:18:29.219 }, 00:18:29.219 { 00:18:29.219 "method": "nvmf_subsystem_add_listener", 00:18:29.219 "params": { 00:18:29.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.219 "listen_address": { 00:18:29.219 "trtype": "TCP", 00:18:29.219 "adrfam": "IPv4", 00:18:29.219 "traddr": "10.0.0.2", 00:18:29.219 "trsvcid": "4420" 00:18:29.219 }, 00:18:29.219 "secure_channel": true 00:18:29.219 } 00:18:29.219 } 00:18:29.219 ] 00:18:29.219 } 00:18:29.219 ] 00:18:29.219 }' 00:18:29.219 20:49:53 -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:29.480 20:49:53 -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:29.480 "subsystems": [ 00:18:29.480 { 00:18:29.480 "subsystem": "keyring", 00:18:29.480 "config": [] 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "subsystem": "iobuf", 00:18:29.480 "config": [ 00:18:29.480 { 00:18:29.480 "method": "iobuf_set_options", 00:18:29.480 "params": { 00:18:29.480 "small_pool_count": 8192, 00:18:29.480 "large_pool_count": 1024, 00:18:29.480 "small_bufsize": 8192, 00:18:29.480 "large_bufsize": 135168 00:18:29.480 } 00:18:29.480 } 00:18:29.480 ] 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "subsystem": "sock", 00:18:29.480 "config": [ 00:18:29.480 { 00:18:29.480 "method": "sock_impl_set_options", 00:18:29.480 "params": { 00:18:29.480 "impl_name": "posix", 00:18:29.480 "recv_buf_size": 2097152, 00:18:29.480 "send_buf_size": 2097152, 00:18:29.480 "enable_recv_pipe": true, 00:18:29.480 "enable_quickack": false, 00:18:29.480 "enable_placement_id": 0, 00:18:29.480 "enable_zerocopy_send_server": true, 00:18:29.480 "enable_zerocopy_send_client": false, 00:18:29.480 "zerocopy_threshold": 0, 00:18:29.480 "tls_version": 0, 00:18:29.480 "enable_ktls": false 00:18:29.480 } 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "method": "sock_impl_set_options", 00:18:29.480 "params": { 00:18:29.480 "impl_name": "ssl", 00:18:29.480 "recv_buf_size": 4096, 00:18:29.480 "send_buf_size": 4096, 00:18:29.480 "enable_recv_pipe": true, 00:18:29.480 "enable_quickack": false, 00:18:29.480 "enable_placement_id": 0, 00:18:29.480 "enable_zerocopy_send_server": true, 00:18:29.480 "enable_zerocopy_send_client": false, 00:18:29.480 "zerocopy_threshold": 0, 00:18:29.480 "tls_version": 0, 00:18:29.480 "enable_ktls": false 00:18:29.480 } 00:18:29.480 } 00:18:29.480 ] 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "subsystem": "vmd", 00:18:29.480 "config": [] 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "subsystem": "accel", 00:18:29.480 "config": [ 00:18:29.480 { 00:18:29.480 "method": "accel_set_options", 00:18:29.480 "params": { 00:18:29.480 "small_cache_size": 128, 00:18:29.480 "large_cache_size": 16, 00:18:29.480 "task_count": 2048, 00:18:29.480 "sequence_count": 2048, 00:18:29.480 "buf_count": 2048 00:18:29.480 } 00:18:29.480 } 00:18:29.480 ] 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "subsystem": "bdev", 00:18:29.480 "config": [ 00:18:29.480 { 00:18:29.480 "method": "bdev_set_options", 00:18:29.480 "params": { 00:18:29.480 "bdev_io_pool_size": 65535, 
00:18:29.480 "bdev_io_cache_size": 256, 00:18:29.480 "bdev_auto_examine": true, 00:18:29.480 "iobuf_small_cache_size": 128, 00:18:29.480 "iobuf_large_cache_size": 16 00:18:29.480 } 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "method": "bdev_raid_set_options", 00:18:29.480 "params": { 00:18:29.480 "process_window_size_kb": 1024 00:18:29.480 } 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "method": "bdev_iscsi_set_options", 00:18:29.480 "params": { 00:18:29.480 "timeout_sec": 30 00:18:29.480 } 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "method": "bdev_nvme_set_options", 00:18:29.480 "params": { 00:18:29.480 "action_on_timeout": "none", 00:18:29.480 "timeout_us": 0, 00:18:29.480 "timeout_admin_us": 0, 00:18:29.480 "keep_alive_timeout_ms": 10000, 00:18:29.480 "arbitration_burst": 0, 00:18:29.480 "low_priority_weight": 0, 00:18:29.480 "medium_priority_weight": 0, 00:18:29.480 "high_priority_weight": 0, 00:18:29.480 "nvme_adminq_poll_period_us": 10000, 00:18:29.480 "nvme_ioq_poll_period_us": 0, 00:18:29.480 "io_queue_requests": 512, 00:18:29.480 "delay_cmd_submit": true, 00:18:29.480 "transport_retry_count": 4, 00:18:29.480 "bdev_retry_count": 3, 00:18:29.480 "transport_ack_timeout": 0, 00:18:29.480 "ctrlr_loss_timeout_sec": 0, 00:18:29.480 "reconnect_delay_sec": 0, 00:18:29.480 "fast_io_fail_timeout_sec": 0, 00:18:29.480 "disable_auto_failback": false, 00:18:29.480 "generate_uuids": false, 00:18:29.480 "transport_tos": 0, 00:18:29.480 "nvme_error_stat": false, 00:18:29.480 "rdma_srq_size": 0, 00:18:29.480 "io_path_stat": false, 00:18:29.480 "allow_accel_sequence": false, 00:18:29.480 "rdma_max_cq_size": 0, 00:18:29.480 "rdma_cm_event_timeout_ms": 0, 00:18:29.480 "dhchap_digests": [ 00:18:29.480 "sha256", 00:18:29.480 "sha384", 00:18:29.480 "sha512" 00:18:29.480 ], 00:18:29.480 "dhchap_dhgroups": [ 00:18:29.480 "null", 00:18:29.480 "ffdhe2048", 00:18:29.480 "ffdhe3072", 00:18:29.480 "ffdhe4096", 00:18:29.480 "ffdhe6144", 00:18:29.480 "ffdhe8192" 00:18:29.480 ] 00:18:29.480 } 00:18:29.480 }, 00:18:29.480 { 00:18:29.480 "method": "bdev_nvme_attach_controller", 00:18:29.480 "params": { 00:18:29.480 "name": "TLSTEST", 00:18:29.480 "trtype": "TCP", 00:18:29.480 "adrfam": "IPv4", 00:18:29.480 "traddr": "10.0.0.2", 00:18:29.480 "trsvcid": "4420", 00:18:29.480 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.480 "prchk_reftag": false, 00:18:29.480 "prchk_guard": false, 00:18:29.480 "ctrlr_loss_timeout_sec": 0, 00:18:29.480 "reconnect_delay_sec": 0, 00:18:29.481 "fast_io_fail_timeout_sec": 0, 00:18:29.481 "psk": "/tmp/tmp.a5cbY8fw61", 00:18:29.481 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:29.481 "hdgst": false, 00:18:29.481 "ddgst": false 00:18:29.481 } 00:18:29.481 }, 00:18:29.481 { 00:18:29.481 "method": "bdev_nvme_set_hotplug", 00:18:29.481 "params": { 00:18:29.481 "period_us": 100000, 00:18:29.481 "enable": false 00:18:29.481 } 00:18:29.481 }, 00:18:29.481 { 00:18:29.481 "method": "bdev_wait_for_examine" 00:18:29.481 } 00:18:29.481 ] 00:18:29.481 }, 00:18:29.481 { 00:18:29.481 "subsystem": "nbd", 00:18:29.481 "config": [] 00:18:29.481 } 00:18:29.481 ] 00:18:29.481 }' 00:18:29.481 20:49:53 -- target/tls.sh@199 -- # killprocess 2796423 00:18:29.481 20:49:53 -- common/autotest_common.sh@936 -- # '[' -z 2796423 ']' 00:18:29.481 20:49:53 -- common/autotest_common.sh@940 -- # kill -0 2796423 00:18:29.481 20:49:53 -- common/autotest_common.sh@941 -- # uname 00:18:29.481 20:49:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.481 20:49:53 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 2796423 00:18:29.481 20:49:54 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:29.481 20:49:54 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:29.481 20:49:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2796423' 00:18:29.481 killing process with pid 2796423 00:18:29.481 20:49:54 -- common/autotest_common.sh@955 -- # kill 2796423 00:18:29.481 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.481 00:18:29.481 Latency(us) 00:18:29.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.481 =================================================================================================================== 00:18:29.481 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:29.481 [2024-04-24 20:49:54.011966] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:29.481 20:49:54 -- common/autotest_common.sh@960 -- # wait 2796423 00:18:29.742 20:49:54 -- target/tls.sh@200 -- # killprocess 2796050 00:18:29.742 20:49:54 -- common/autotest_common.sh@936 -- # '[' -z 2796050 ']' 00:18:29.742 20:49:54 -- common/autotest_common.sh@940 -- # kill -0 2796050 00:18:29.742 20:49:54 -- common/autotest_common.sh@941 -- # uname 00:18:29.742 20:49:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:29.742 20:49:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2796050 00:18:29.742 20:49:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:29.742 20:49:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:29.742 20:49:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2796050' 00:18:29.742 killing process with pid 2796050 00:18:29.742 20:49:54 -- common/autotest_common.sh@955 -- # kill 2796050 00:18:29.742 [2024-04-24 20:49:54.180206] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:29.742 20:49:54 -- common/autotest_common.sh@960 -- # wait 2796050 00:18:29.742 20:49:54 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:29.742 20:49:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:29.742 20:49:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:29.742 20:49:54 -- common/autotest_common.sh@10 -- # set +x 00:18:29.742 20:49:54 -- target/tls.sh@203 -- # echo '{ 00:18:29.742 "subsystems": [ 00:18:29.742 { 00:18:29.742 "subsystem": "keyring", 00:18:29.742 "config": [] 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "subsystem": "iobuf", 00:18:29.742 "config": [ 00:18:29.742 { 00:18:29.742 "method": "iobuf_set_options", 00:18:29.742 "params": { 00:18:29.742 "small_pool_count": 8192, 00:18:29.742 "large_pool_count": 1024, 00:18:29.742 "small_bufsize": 8192, 00:18:29.742 "large_bufsize": 135168 00:18:29.742 } 00:18:29.742 } 00:18:29.742 ] 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "subsystem": "sock", 00:18:29.742 "config": [ 00:18:29.742 { 00:18:29.742 "method": "sock_impl_set_options", 00:18:29.742 "params": { 00:18:29.742 "impl_name": "posix", 00:18:29.742 "recv_buf_size": 2097152, 00:18:29.742 "send_buf_size": 2097152, 00:18:29.742 "enable_recv_pipe": true, 00:18:29.742 "enable_quickack": false, 00:18:29.742 "enable_placement_id": 0, 00:18:29.742 "enable_zerocopy_send_server": true, 00:18:29.742 "enable_zerocopy_send_client": false, 00:18:29.742 "zerocopy_threshold": 0, 
00:18:29.742 "tls_version": 0, 00:18:29.742 "enable_ktls": false 00:18:29.742 } 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "method": "sock_impl_set_options", 00:18:29.742 "params": { 00:18:29.742 "impl_name": "ssl", 00:18:29.742 "recv_buf_size": 4096, 00:18:29.742 "send_buf_size": 4096, 00:18:29.742 "enable_recv_pipe": true, 00:18:29.742 "enable_quickack": false, 00:18:29.742 "enable_placement_id": 0, 00:18:29.742 "enable_zerocopy_send_server": true, 00:18:29.742 "enable_zerocopy_send_client": false, 00:18:29.742 "zerocopy_threshold": 0, 00:18:29.742 "tls_version": 0, 00:18:29.742 "enable_ktls": false 00:18:29.742 } 00:18:29.742 } 00:18:29.742 ] 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "subsystem": "vmd", 00:18:29.742 "config": [] 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "subsystem": "accel", 00:18:29.742 "config": [ 00:18:29.742 { 00:18:29.742 "method": "accel_set_options", 00:18:29.742 "params": { 00:18:29.742 "small_cache_size": 128, 00:18:29.742 "large_cache_size": 16, 00:18:29.742 "task_count": 2048, 00:18:29.742 "sequence_count": 2048, 00:18:29.742 "buf_count": 2048 00:18:29.742 } 00:18:29.742 } 00:18:29.742 ] 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "subsystem": "bdev", 00:18:29.742 "config": [ 00:18:29.742 { 00:18:29.742 "method": "bdev_set_options", 00:18:29.742 "params": { 00:18:29.742 "bdev_io_pool_size": 65535, 00:18:29.742 "bdev_io_cache_size": 256, 00:18:29.742 "bdev_auto_examine": true, 00:18:29.742 "iobuf_small_cache_size": 128, 00:18:29.742 "iobuf_large_cache_size": 16 00:18:29.742 } 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "method": "bdev_raid_set_options", 00:18:29.742 "params": { 00:18:29.742 "process_window_size_kb": 1024 00:18:29.742 } 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "method": "bdev_iscsi_set_options", 00:18:29.742 "params": { 00:18:29.742 "timeout_sec": 30 00:18:29.742 } 00:18:29.742 }, 00:18:29.742 { 00:18:29.742 "method": "bdev_nvme_set_options", 00:18:29.742 "params": { 00:18:29.742 "action_on_timeout": "none", 00:18:29.742 "timeout_us": 0, 00:18:29.742 "timeout_admin_us": 0, 00:18:29.742 "keep_alive_timeout_ms": 10000, 00:18:29.742 "arbitration_burst": 0, 00:18:29.742 "low_priority_weight": 0, 00:18:29.742 "medium_priority_weight": 0, 00:18:29.742 "high_priority_weight": 0, 00:18:29.742 "nvme_adminq_poll_period_us": 10000, 00:18:29.742 "nvme_ioq_poll_period_us": 0, 00:18:29.742 "io_queue_requests": 0, 00:18:29.742 "delay_cmd_submit": true, 00:18:29.742 "transport_retry_count": 4, 00:18:29.742 "bdev_retry_count": 3, 00:18:29.742 "transport_ack_timeout": 0, 00:18:29.742 "ctrlr_loss_timeout_sec": 0, 00:18:29.742 "reconnect_delay_sec": 0, 00:18:29.742 "fast_io_fail_timeout_sec": 0, 00:18:29.742 "disable_auto_failback": false, 00:18:29.742 "generate_uuids": false, 00:18:29.743 "transport_tos": 0, 00:18:29.743 "nvme_error_stat": false, 00:18:29.743 "rdma_srq_size": 0, 00:18:29.743 "io_path_stat": false, 00:18:29.743 "allow_accel_sequence": false, 00:18:29.743 "rdma_max_cq_size": 0, 00:18:29.743 "rdma_cm_event_timeout_ms": 0, 00:18:29.743 "dhchap_digests": [ 00:18:29.743 "sha256", 00:18:29.743 "sha384", 00:18:29.743 "sha512" 00:18:29.743 ], 00:18:29.743 "dhchap_dhgroups": [ 00:18:29.743 "null", 00:18:29.743 "ffdhe2048", 00:18:29.743 "ffdhe3072", 00:18:29.743 "ffdhe4096", 00:18:29.743 "ffdhe6144", 00:18:29.743 "ffdhe8192" 00:18:29.743 ] 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "bdev_nvme_set_hotplug", 00:18:29.743 "params": { 00:18:29.743 "period_us": 100000, 00:18:29.743 "enable": false 00:18:29.743 } 00:18:29.743 }, 
00:18:29.743 { 00:18:29.743 "method": "bdev_malloc_create", 00:18:29.743 "params": { 00:18:29.743 "name": "malloc0", 00:18:29.743 "num_blocks": 8192, 00:18:29.743 "block_size": 4096, 00:18:29.743 "physical_block_size": 4096, 00:18:29.743 "uuid": "93fc9ecb-3f67-44ed-af22-fe400a2ea90a", 00:18:29.743 "optimal_io_boundary": 0 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "bdev_wait_for_examine" 00:18:29.743 } 00:18:29.743 ] 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "subsystem": "nbd", 00:18:29.743 "config": [] 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "subsystem": "scheduler", 00:18:29.743 "config": [ 00:18:29.743 { 00:18:29.743 "method": "framework_set_scheduler", 00:18:29.743 "params": { 00:18:29.743 "name": "static" 00:18:29.743 } 00:18:29.743 } 00:18:29.743 ] 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "subsystem": "nvmf", 00:18:29.743 "config": [ 00:18:29.743 { 00:18:29.743 "method": "nvmf_set_config", 00:18:29.743 "params": { 00:18:29.743 "discovery_filter": "match_any", 00:18:29.743 "admin_cmd_passthru": { 00:18:29.743 "identify_ctrlr": false 00:18:29.743 } 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "nvmf_set_max_subsystems", 00:18:29.743 "params": { 00:18:29.743 "max_subsystems": 1024 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "nvmf_set_crdt", 00:18:29.743 "params": { 00:18:29.743 "crdt1": 0, 00:18:29.743 "crdt2": 0, 00:18:29.743 "crdt3": 0 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "nvmf_create_transport", 00:18:29.743 "params": { 00:18:29.743 "trtype": "TCP", 00:18:29.743 "max_queue_depth": 128, 00:18:29.743 "max_io_qpairs_per_ctrlr": 127, 00:18:29.743 "in_capsule_data_size": 4096, 00:18:29.743 "max_io_size": 131072, 00:18:29.743 "io_unit_size": 131072, 00:18:29.743 "max_aq_depth": 128, 00:18:29.743 "num_shared_buffers": 511, 00:18:29.743 "buf_cache_size": 4294967295, 00:18:29.743 "dif_insert_or_strip": false, 00:18:29.743 "zcopy": false, 00:18:29.743 "c2h_success": false, 00:18:29.743 "sock_priority": 0, 00:18:29.743 "abort_timeout_sec": 1, 00:18:29.743 "ack_timeout": 0, 00:18:29.743 "data_wr_pool_size": 0 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "nvmf_create_subsystem", 00:18:29.743 "params": { 00:18:29.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.743 "allow_any_host": false, 00:18:29.743 "serial_number": "SPDK00000000000001", 00:18:29.743 "model_number": "SPDK bdev Controller", 00:18:29.743 "max_namespaces": 10, 00:18:29.743 "min_cntlid": 1, 00:18:29.743 "max_cntlid": 65519, 00:18:29.743 "ana_reporting": false 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "nvmf_subsystem_add_host", 00:18:29.743 "params": { 00:18:29.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.743 "host": "nqn.2016-06.io.spdk:host1", 00:18:29.743 "psk": "/tmp/tmp.a5cbY8fw61" 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "nvmf_subsystem_add_ns", 00:18:29.743 "params": { 00:18:29.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.743 "namespace": { 00:18:29.743 "nsid": 1, 00:18:29.743 "bdev_name": "malloc0", 00:18:29.743 "nguid": "93FC9ECB3F6744EDAF22FE400A2EA90A", 00:18:29.743 "uuid": "93fc9ecb-3f67-44ed-af22-fe400a2ea90a", 00:18:29.743 "no_auto_visible": false 00:18:29.743 } 00:18:29.743 } 00:18:29.743 }, 00:18:29.743 { 00:18:29.743 "method": "nvmf_subsystem_add_listener", 00:18:29.743 "params": { 00:18:29.743 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:29.743 "listen_address": { 00:18:29.743 "trtype": "TCP", 00:18:29.743 "adrfam": 
"IPv4", 00:18:29.743 "traddr": "10.0.0.2", 00:18:29.743 "trsvcid": "4420" 00:18:29.743 }, 00:18:29.743 "secure_channel": true 00:18:29.743 } 00:18:29.743 } 00:18:29.743 ] 00:18:29.743 } 00:18:29.743 ] 00:18:29.743 }' 00:18:29.743 20:49:54 -- nvmf/common.sh@470 -- # nvmfpid=2796775 00:18:29.743 20:49:54 -- nvmf/common.sh@471 -- # waitforlisten 2796775 00:18:29.743 20:49:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:29.743 20:49:54 -- common/autotest_common.sh@817 -- # '[' -z 2796775 ']' 00:18:29.743 20:49:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.743 20:49:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:29.743 20:49:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.743 20:49:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:29.743 20:49:54 -- common/autotest_common.sh@10 -- # set +x 00:18:29.743 [2024-04-24 20:49:54.374282] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:29.743 [2024-04-24 20:49:54.374336] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.004 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.004 [2024-04-24 20:49:54.437365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.004 [2024-04-24 20:49:54.499309] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.004 [2024-04-24 20:49:54.499344] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.004 [2024-04-24 20:49:54.499351] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.004 [2024-04-24 20:49:54.499358] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.004 [2024-04-24 20:49:54.499363] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:30.004 [2024-04-24 20:49:54.499422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:30.264 [2024-04-24 20:49:54.680623] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.264 [2024-04-24 20:49:54.696565] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:30.264 [2024-04-24 20:49:54.712623] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:30.264 [2024-04-24 20:49:54.721046] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:30.834 20:49:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:30.834 20:49:55 -- common/autotest_common.sh@850 -- # return 0 00:18:30.834 20:49:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:30.834 20:49:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:30.834 20:49:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.834 20:49:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.834 20:49:55 -- target/tls.sh@207 -- # bdevperf_pid=2796807 00:18:30.834 20:49:55 -- target/tls.sh@208 -- # waitforlisten 2796807 /var/tmp/bdevperf.sock 00:18:30.834 20:49:55 -- common/autotest_common.sh@817 -- # '[' -z 2796807 ']' 00:18:30.834 20:49:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.834 20:49:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:30.834 20:49:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
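From target/tls.sh@203 onward the same target and initiator are restarted from saved JSON instead of individual RPCs: the configuration captured earlier with save_config is echoed back into nvmf_tgt as -c /dev/fd/62, and bdevperf is fed its own dump as -c /dev/fd/63 just below. A minimal sketch of that pattern, assuming bash process substitution is what produces the /dev/fd paths seen in the log (the ip netns wrapper and -i/-e trace flags from the trace are omitted here):

tgtconf=$(scripts/rpc.py save_config)                          # dump the running target's config (target/tls.sh@196)
build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &              # appears as "-c /dev/fd/62" in the trace above
bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)   # target/tls.sh@197
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
  -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &  # appears as "-c /dev/fd/63" just below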
00:18:30.834 20:49:55 -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:30.834 20:49:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:30.834 20:49:55 -- common/autotest_common.sh@10 -- # set +x 00:18:30.834 20:49:55 -- target/tls.sh@204 -- # echo '{ 00:18:30.834 "subsystems": [ 00:18:30.834 { 00:18:30.834 "subsystem": "keyring", 00:18:30.834 "config": [] 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "subsystem": "iobuf", 00:18:30.834 "config": [ 00:18:30.834 { 00:18:30.834 "method": "iobuf_set_options", 00:18:30.834 "params": { 00:18:30.834 "small_pool_count": 8192, 00:18:30.834 "large_pool_count": 1024, 00:18:30.834 "small_bufsize": 8192, 00:18:30.834 "large_bufsize": 135168 00:18:30.834 } 00:18:30.834 } 00:18:30.834 ] 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "subsystem": "sock", 00:18:30.834 "config": [ 00:18:30.834 { 00:18:30.834 "method": "sock_impl_set_options", 00:18:30.834 "params": { 00:18:30.834 "impl_name": "posix", 00:18:30.834 "recv_buf_size": 2097152, 00:18:30.834 "send_buf_size": 2097152, 00:18:30.834 "enable_recv_pipe": true, 00:18:30.834 "enable_quickack": false, 00:18:30.834 "enable_placement_id": 0, 00:18:30.834 "enable_zerocopy_send_server": true, 00:18:30.834 "enable_zerocopy_send_client": false, 00:18:30.834 "zerocopy_threshold": 0, 00:18:30.834 "tls_version": 0, 00:18:30.834 "enable_ktls": false 00:18:30.834 } 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "method": "sock_impl_set_options", 00:18:30.834 "params": { 00:18:30.834 "impl_name": "ssl", 00:18:30.834 "recv_buf_size": 4096, 00:18:30.834 "send_buf_size": 4096, 00:18:30.834 "enable_recv_pipe": true, 00:18:30.834 "enable_quickack": false, 00:18:30.834 "enable_placement_id": 0, 00:18:30.834 "enable_zerocopy_send_server": true, 00:18:30.834 "enable_zerocopy_send_client": false, 00:18:30.834 "zerocopy_threshold": 0, 00:18:30.834 "tls_version": 0, 00:18:30.834 "enable_ktls": false 00:18:30.834 } 00:18:30.834 } 00:18:30.834 ] 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "subsystem": "vmd", 00:18:30.834 "config": [] 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "subsystem": "accel", 00:18:30.834 "config": [ 00:18:30.834 { 00:18:30.834 "method": "accel_set_options", 00:18:30.834 "params": { 00:18:30.834 "small_cache_size": 128, 00:18:30.834 "large_cache_size": 16, 00:18:30.834 "task_count": 2048, 00:18:30.834 "sequence_count": 2048, 00:18:30.834 "buf_count": 2048 00:18:30.834 } 00:18:30.834 } 00:18:30.834 ] 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "subsystem": "bdev", 00:18:30.834 "config": [ 00:18:30.834 { 00:18:30.834 "method": "bdev_set_options", 00:18:30.834 "params": { 00:18:30.834 "bdev_io_pool_size": 65535, 00:18:30.834 "bdev_io_cache_size": 256, 00:18:30.834 "bdev_auto_examine": true, 00:18:30.834 "iobuf_small_cache_size": 128, 00:18:30.834 "iobuf_large_cache_size": 16 00:18:30.834 } 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "method": "bdev_raid_set_options", 00:18:30.834 "params": { 00:18:30.834 "process_window_size_kb": 1024 00:18:30.834 } 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "method": "bdev_iscsi_set_options", 00:18:30.834 "params": { 00:18:30.834 "timeout_sec": 30 00:18:30.834 } 00:18:30.834 }, 00:18:30.834 { 00:18:30.834 "method": "bdev_nvme_set_options", 00:18:30.834 "params": { 00:18:30.834 "action_on_timeout": "none", 00:18:30.834 "timeout_us": 0, 00:18:30.834 "timeout_admin_us": 0, 00:18:30.834 "keep_alive_timeout_ms": 10000, 00:18:30.834 
"arbitration_burst": 0, 00:18:30.834 "low_priority_weight": 0, 00:18:30.834 "medium_priority_weight": 0, 00:18:30.834 "high_priority_weight": 0, 00:18:30.834 "nvme_adminq_poll_period_us": 10000, 00:18:30.834 "nvme_ioq_poll_period_us": 0, 00:18:30.834 "io_queue_requests": 512, 00:18:30.834 "delay_cmd_submit": true, 00:18:30.834 "transport_retry_count": 4, 00:18:30.834 "bdev_retry_count": 3, 00:18:30.834 "transport_ack_timeout": 0, 00:18:30.835 "ctrlr_loss_timeout_sec": 0, 00:18:30.835 "reconnect_delay_sec": 0, 00:18:30.835 "fast_io_fail_timeout_sec": 0, 00:18:30.835 "disable_auto_failback": false, 00:18:30.835 "generate_uuids": false, 00:18:30.835 "transport_tos": 0, 00:18:30.835 "nvme_error_stat": false, 00:18:30.835 "rdma_srq_size": 0, 00:18:30.835 "io_path_stat": false, 00:18:30.835 "allow_accel_sequence": false, 00:18:30.835 "rdma_max_cq_size": 0, 00:18:30.835 "rdma_cm_event_timeout_ms": 0, 00:18:30.835 "dhchap_digests": [ 00:18:30.835 "sha256", 00:18:30.835 "sha384", 00:18:30.835 "sha512" 00:18:30.835 ], 00:18:30.835 "dhchap_dhgroups": [ 00:18:30.835 "null", 00:18:30.835 "ffdhe2048", 00:18:30.835 "ffdhe3072", 00:18:30.835 "ffdhe4096", 00:18:30.835 "ffdhe6144", 00:18:30.835 "ffdhe8192" 00:18:30.835 ] 00:18:30.835 } 00:18:30.835 }, 00:18:30.835 { 00:18:30.835 "method": "bdev_nvme_attach_controller", 00:18:30.835 "params": { 00:18:30.835 "name": "TLSTEST", 00:18:30.835 "trtype": "TCP", 00:18:30.835 "adrfam": "IPv4", 00:18:30.835 "traddr": "10.0.0.2", 00:18:30.835 "trsvcid": "4420", 00:18:30.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.835 "prchk_reftag": false, 00:18:30.835 "prchk_guard": false, 00:18:30.835 "ctrlr_loss_timeout_sec": 0, 00:18:30.835 "reconnect_delay_sec": 0, 00:18:30.835 "fast_io_fail_timeout_sec": 0, 00:18:30.835 "psk": "/tmp/tmp.a5cbY8fw61", 00:18:30.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.835 "hdgst": false, 00:18:30.835 "ddgst": false 00:18:30.835 } 00:18:30.835 }, 00:18:30.835 { 00:18:30.835 "method": "bdev_nvme_set_hotplug", 00:18:30.835 "params": { 00:18:30.835 "period_us": 100000, 00:18:30.835 "enable": false 00:18:30.835 } 00:18:30.835 }, 00:18:30.835 { 00:18:30.835 "method": "bdev_wait_for_examine" 00:18:30.835 } 00:18:30.835 ] 00:18:30.835 }, 00:18:30.835 { 00:18:30.835 "subsystem": "nbd", 00:18:30.835 "config": [] 00:18:30.835 } 00:18:30.835 ] 00:18:30.835 }' 00:18:30.835 [2024-04-24 20:49:55.309031] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:18:30.835 [2024-04-24 20:49:55.309081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796807 ] 00:18:30.835 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.835 [2024-04-24 20:49:55.357959] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:30.835 [2024-04-24 20:49:55.409022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.095 [2024-04-24 20:49:55.525585] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.095 [2024-04-24 20:49:55.525648] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:31.665 20:49:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:31.665 20:49:56 -- common/autotest_common.sh@850 -- # return 0 00:18:31.665 20:49:56 -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:31.665 Running I/O for 10 seconds... 00:18:41.736 00:18:41.736 Latency(us) 00:18:41.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.736 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:41.736 Verification LBA range: start 0x0 length 0x2000 00:18:41.736 TLSTESTn1 : 10.03 4168.94 16.28 0.00 0.00 30656.86 6225.92 58108.59 00:18:41.736 =================================================================================================================== 00:18:41.736 Total : 4168.94 16.28 0.00 0.00 30656.86 6225.92 58108.59 00:18:41.736 0 00:18:41.736 20:50:06 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:41.736 20:50:06 -- target/tls.sh@214 -- # killprocess 2796807 00:18:41.736 20:50:06 -- common/autotest_common.sh@936 -- # '[' -z 2796807 ']' 00:18:41.736 20:50:06 -- common/autotest_common.sh@940 -- # kill -0 2796807 00:18:41.736 20:50:06 -- common/autotest_common.sh@941 -- # uname 00:18:41.736 20:50:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:41.736 20:50:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2796807 00:18:42.026 20:50:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:42.026 20:50:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:42.026 20:50:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2796807' 00:18:42.026 killing process with pid 2796807 00:18:42.026 20:50:06 -- common/autotest_common.sh@955 -- # kill 2796807 00:18:42.026 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.026 00:18:42.026 Latency(us) 00:18:42.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.026 =================================================================================================================== 00:18:42.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.026 [2024-04-24 20:50:06.392459] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:42.026 20:50:06 -- common/autotest_common.sh@960 -- # wait 2796807 00:18:42.026 20:50:06 -- target/tls.sh@215 -- # killprocess 2796775 00:18:42.026 20:50:06 -- common/autotest_common.sh@936 -- # '[' -z 2796775 ']' 
00:18:42.026 20:50:06 -- common/autotest_common.sh@940 -- # kill -0 2796775 00:18:42.026 20:50:06 -- common/autotest_common.sh@941 -- # uname 00:18:42.026 20:50:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:42.026 20:50:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2796775 00:18:42.026 20:50:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:42.026 20:50:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:42.026 20:50:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2796775' 00:18:42.026 killing process with pid 2796775 00:18:42.026 20:50:06 -- common/autotest_common.sh@955 -- # kill 2796775 00:18:42.026 [2024-04-24 20:50:06.560667] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:42.026 20:50:06 -- common/autotest_common.sh@960 -- # wait 2796775 00:18:42.286 20:50:06 -- target/tls.sh@218 -- # nvmfappstart 00:18:42.286 20:50:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:42.286 20:50:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:42.286 20:50:06 -- common/autotest_common.sh@10 -- # set +x 00:18:42.286 20:50:06 -- nvmf/common.sh@470 -- # nvmfpid=2799152 00:18:42.286 20:50:06 -- nvmf/common.sh@471 -- # waitforlisten 2799152 00:18:42.286 20:50:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:42.286 20:50:06 -- common/autotest_common.sh@817 -- # '[' -z 2799152 ']' 00:18:42.286 20:50:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.286 20:50:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:42.286 20:50:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.286 20:50:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:42.286 20:50:06 -- common/autotest_common.sh@10 -- # set +x 00:18:42.286 [2024-04-24 20:50:06.764379] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:42.286 [2024-04-24 20:50:06.764437] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.286 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.286 [2024-04-24 20:50:06.846784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.547 [2024-04-24 20:50:06.937589] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.547 [2024-04-24 20:50:06.937647] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.547 [2024-04-24 20:50:06.937655] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.547 [2024-04-24 20:50:06.937662] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.547 [2024-04-24 20:50:06.937668] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
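As the notices above point out, this nvmf_tgt instance was started with -e 0xFFFF, i.e. all tracepoint groups enabled, so the trace buffer can be inspected while the target runs or collected afterwards. A minimal sketch built only from the hints the log itself prints (it assumes spdk_trace is on PATH and the tarball name is just an example):

    # live snapshot of nvmf tracepoints for app instance 0, as suggested by the notice above
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis/debug
    tar -C /dev/shm -czf nvmf_trace.0_shm.tar.gz nvmf_trace.0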
00:18:42.547 [2024-04-24 20:50:06.937705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.118 20:50:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:43.118 20:50:07 -- common/autotest_common.sh@850 -- # return 0 00:18:43.118 20:50:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:43.118 20:50:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:43.118 20:50:07 -- common/autotest_common.sh@10 -- # set +x 00:18:43.118 20:50:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.118 20:50:07 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.a5cbY8fw61 00:18:43.118 20:50:07 -- target/tls.sh@49 -- # local key=/tmp/tmp.a5cbY8fw61 00:18:43.118 20:50:07 -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.379 [2024-04-24 20:50:07.874208] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.379 20:50:07 -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:43.639 20:50:08 -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:43.900 [2024-04-24 20:50:08.307299] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:43.900 [2024-04-24 20:50:08.307508] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.900 20:50:08 -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:43.900 malloc0 00:18:43.900 20:50:08 -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:44.161 20:50:08 -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61 00:18:44.423 [2024-04-24 20:50:08.939320] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:44.423 20:50:08 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:44.423 20:50:08 -- target/tls.sh@222 -- # bdevperf_pid=2799517 00:18:44.423 20:50:08 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.423 20:50:08 -- target/tls.sh@225 -- # waitforlisten 2799517 /var/tmp/bdevperf.sock 00:18:44.423 20:50:08 -- common/autotest_common.sh@817 -- # '[' -z 2799517 ']' 00:18:44.423 20:50:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.423 20:50:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:44.423 20:50:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:44.423 20:50:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:44.423 20:50:08 -- common/autotest_common.sh@10 -- # set +x 00:18:44.423 [2024-04-24 20:50:08.989629] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:44.423 [2024-04-24 20:50:08.989694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799517 ] 00:18:44.423 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.423 [2024-04-24 20:50:09.051320] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.684 [2024-04-24 20:50:09.123473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.684 20:50:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:44.684 20:50:09 -- common/autotest_common.sh@850 -- # return 0 00:18:44.684 20:50:09 -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.a5cbY8fw61 00:18:44.944 20:50:09 -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:44.944 [2024-04-24 20:50:09.581418] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.205 nvme0n1 00:18:45.205 20:50:09 -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.205 Running I/O for 1 seconds... 
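The xtrace above is the RPC sequence target/tls.sh drives for this case: bring up a TLS-capable NVMe/TCP target, export a malloc namespace to a host that must present a PSK, then attach bdevperf through its own RPC socket using the same key. Condensed into a plain shell sketch using the same addresses, NQNs, and PSK path that appear in this log ($SPDK is only shorthand for the checked-out repo path; in the log the target itself runs inside the cvl_0_0_ns_spdk network namespace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC=$SPDK/scripts/rpc.py
    # target side: TCP transport, subsystem, TLS listener (-k), malloc namespace, allowed host + PSK
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.a5cbY8fw61
    # initiator side: bdevperf in RPC-wait mode (-z), key loaded into its keyring, controller attached with --psk
    $SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
    # (wait for /var/tmp/bdevperf.sock to appear before issuing the next RPCs)
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.a5cbY8fw61
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests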
00:18:46.589 00:18:46.589 Latency(us) 00:18:46.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.589 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:46.589 Verification LBA range: start 0x0 length 0x2000 00:18:46.589 nvme0n1 : 1.01 3464.66 13.53 0.00 0.00 36655.43 6116.69 70341.97 00:18:46.589 =================================================================================================================== 00:18:46.589 Total : 3464.66 13.53 0.00 0.00 36655.43 6116.69 70341.97 00:18:46.589 0 00:18:46.589 20:50:10 -- target/tls.sh@234 -- # killprocess 2799517 00:18:46.589 20:50:10 -- common/autotest_common.sh@936 -- # '[' -z 2799517 ']' 00:18:46.589 20:50:10 -- common/autotest_common.sh@940 -- # kill -0 2799517 00:18:46.589 20:50:10 -- common/autotest_common.sh@941 -- # uname 00:18:46.589 20:50:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:46.589 20:50:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2799517 00:18:46.589 20:50:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:46.589 20:50:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:46.589 20:50:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2799517' 00:18:46.590 killing process with pid 2799517 00:18:46.590 20:50:10 -- common/autotest_common.sh@955 -- # kill 2799517 00:18:46.590 Received shutdown signal, test time was about 1.000000 seconds 00:18:46.590 00:18:46.590 Latency(us) 00:18:46.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.590 =================================================================================================================== 00:18:46.590 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.590 20:50:10 -- common/autotest_common.sh@960 -- # wait 2799517 00:18:46.590 20:50:11 -- target/tls.sh@235 -- # killprocess 2799152 00:18:46.590 20:50:11 -- common/autotest_common.sh@936 -- # '[' -z 2799152 ']' 00:18:46.590 20:50:11 -- common/autotest_common.sh@940 -- # kill -0 2799152 00:18:46.590 20:50:11 -- common/autotest_common.sh@941 -- # uname 00:18:46.590 20:50:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:46.590 20:50:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2799152 00:18:46.590 20:50:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:46.590 20:50:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:46.590 20:50:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2799152' 00:18:46.590 killing process with pid 2799152 00:18:46.590 20:50:11 -- common/autotest_common.sh@955 -- # kill 2799152 00:18:46.590 [2024-04-24 20:50:11.068056] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:46.590 20:50:11 -- common/autotest_common.sh@960 -- # wait 2799152 00:18:46.590 20:50:11 -- target/tls.sh@238 -- # nvmfappstart 00:18:46.590 20:50:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:46.590 20:50:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:46.590 20:50:11 -- common/autotest_common.sh@10 -- # set +x 00:18:46.590 20:50:11 -- nvmf/common.sh@470 -- # nvmfpid=2799991 00:18:46.590 20:50:11 -- nvmf/common.sh@471 -- # waitforlisten 2799991 00:18:46.590 20:50:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:18:46.590 20:50:11 -- common/autotest_common.sh@817 -- # '[' -z 2799991 ']' 00:18:46.590 20:50:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.590 20:50:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:46.590 20:50:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.590 20:50:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:46.590 20:50:11 -- common/autotest_common.sh@10 -- # set +x 00:18:46.850 [2024-04-24 20:50:11.269519] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:46.850 [2024-04-24 20:50:11.269574] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.850 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.850 [2024-04-24 20:50:11.352110] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.850 [2024-04-24 20:50:11.414959] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.850 [2024-04-24 20:50:11.414998] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:46.850 [2024-04-24 20:50:11.415006] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.850 [2024-04-24 20:50:11.415012] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.850 [2024-04-24 20:50:11.415018] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:46.850 [2024-04-24 20:50:11.415038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.791 20:50:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:47.791 20:50:12 -- common/autotest_common.sh@850 -- # return 0 00:18:47.791 20:50:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:47.791 20:50:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:47.791 20:50:12 -- common/autotest_common.sh@10 -- # set +x 00:18:47.791 20:50:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:47.791 20:50:12 -- target/tls.sh@239 -- # rpc_cmd 00:18:47.791 20:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:47.791 20:50:12 -- common/autotest_common.sh@10 -- # set +x 00:18:47.791 [2024-04-24 20:50:12.191480] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.791 malloc0 00:18:47.791 [2024-04-24 20:50:12.221748] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.791 [2024-04-24 20:50:12.222057] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.791 20:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:47.791 20:50:12 -- target/tls.sh@252 -- # bdevperf_pid=2800219 00:18:47.791 20:50:12 -- target/tls.sh@254 -- # waitforlisten 2800219 /var/tmp/bdevperf.sock 00:18:47.791 20:50:12 -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:47.791 20:50:12 -- common/autotest_common.sh@817 -- # '[' -z 2800219 ']' 00:18:47.791 20:50:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.791 20:50:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:47.792 20:50:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:47.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.792 20:50:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:47.792 20:50:12 -- common/autotest_common.sh@10 -- # set +x 00:18:47.792 [2024-04-24 20:50:12.302497] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:18:47.792 [2024-04-24 20:50:12.302558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800219 ] 00:18:47.792 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.792 [2024-04-24 20:50:12.364270] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.051 [2024-04-24 20:50:12.436040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.051 20:50:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:48.051 20:50:12 -- common/autotest_common.sh@850 -- # return 0 00:18:48.051 20:50:12 -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.a5cbY8fw61 00:18:48.311 20:50:12 -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:48.312 [2024-04-24 20:50:12.918084] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:48.572 nvme0n1 00:18:48.572 20:50:13 -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.572 Running I/O for 1 seconds... 00:18:49.511 00:18:49.511 Latency(us) 00:18:49.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.511 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:49.511 Verification LBA range: start 0x0 length 0x2000 00:18:49.511 nvme0n1 : 1.03 3057.43 11.94 0.00 0.00 41458.48 6526.29 83886.08 00:18:49.511 =================================================================================================================== 00:18:49.511 Total : 3057.43 11.94 0.00 0.00 41458.48 6526.29 83886.08 00:18:49.511 0 00:18:49.771 20:50:14 -- target/tls.sh@263 -- # rpc_cmd save_config 00:18:49.771 20:50:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:49.771 20:50:14 -- common/autotest_common.sh@10 -- # set +x 00:18:49.771 20:50:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:49.771 20:50:14 -- target/tls.sh@263 -- # tgtcfg='{ 00:18:49.771 "subsystems": [ 00:18:49.771 { 00:18:49.771 "subsystem": "keyring", 00:18:49.771 "config": [ 00:18:49.771 { 00:18:49.771 "method": "keyring_file_add_key", 00:18:49.771 "params": { 00:18:49.771 "name": "key0", 00:18:49.771 "path": "/tmp/tmp.a5cbY8fw61" 00:18:49.771 } 00:18:49.771 } 00:18:49.771 ] 00:18:49.771 }, 00:18:49.771 { 00:18:49.771 "subsystem": "iobuf", 00:18:49.771 "config": [ 00:18:49.771 { 00:18:49.771 "method": "iobuf_set_options", 00:18:49.771 "params": { 00:18:49.771 "small_pool_count": 8192, 00:18:49.771 "large_pool_count": 1024, 00:18:49.771 "small_bufsize": 8192, 00:18:49.771 "large_bufsize": 135168 00:18:49.771 } 00:18:49.771 } 00:18:49.771 ] 00:18:49.771 }, 00:18:49.771 { 00:18:49.771 "subsystem": "sock", 00:18:49.771 "config": [ 00:18:49.771 { 00:18:49.771 "method": "sock_impl_set_options", 00:18:49.771 "params": { 00:18:49.771 "impl_name": "posix", 00:18:49.771 "recv_buf_size": 2097152, 00:18:49.771 "send_buf_size": 2097152, 00:18:49.771 "enable_recv_pipe": true, 00:18:49.771 "enable_quickack": false, 00:18:49.771 "enable_placement_id": 0, 00:18:49.772 
"enable_zerocopy_send_server": true, 00:18:49.772 "enable_zerocopy_send_client": false, 00:18:49.772 "zerocopy_threshold": 0, 00:18:49.772 "tls_version": 0, 00:18:49.772 "enable_ktls": false 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "sock_impl_set_options", 00:18:49.772 "params": { 00:18:49.772 "impl_name": "ssl", 00:18:49.772 "recv_buf_size": 4096, 00:18:49.772 "send_buf_size": 4096, 00:18:49.772 "enable_recv_pipe": true, 00:18:49.772 "enable_quickack": false, 00:18:49.772 "enable_placement_id": 0, 00:18:49.772 "enable_zerocopy_send_server": true, 00:18:49.772 "enable_zerocopy_send_client": false, 00:18:49.772 "zerocopy_threshold": 0, 00:18:49.772 "tls_version": 0, 00:18:49.772 "enable_ktls": false 00:18:49.772 } 00:18:49.772 } 00:18:49.772 ] 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "subsystem": "vmd", 00:18:49.772 "config": [] 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "subsystem": "accel", 00:18:49.772 "config": [ 00:18:49.772 { 00:18:49.772 "method": "accel_set_options", 00:18:49.772 "params": { 00:18:49.772 "small_cache_size": 128, 00:18:49.772 "large_cache_size": 16, 00:18:49.772 "task_count": 2048, 00:18:49.772 "sequence_count": 2048, 00:18:49.772 "buf_count": 2048 00:18:49.772 } 00:18:49.772 } 00:18:49.772 ] 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "subsystem": "bdev", 00:18:49.772 "config": [ 00:18:49.772 { 00:18:49.772 "method": "bdev_set_options", 00:18:49.772 "params": { 00:18:49.772 "bdev_io_pool_size": 65535, 00:18:49.772 "bdev_io_cache_size": 256, 00:18:49.772 "bdev_auto_examine": true, 00:18:49.772 "iobuf_small_cache_size": 128, 00:18:49.772 "iobuf_large_cache_size": 16 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "bdev_raid_set_options", 00:18:49.772 "params": { 00:18:49.772 "process_window_size_kb": 1024 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "bdev_iscsi_set_options", 00:18:49.772 "params": { 00:18:49.772 "timeout_sec": 30 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "bdev_nvme_set_options", 00:18:49.772 "params": { 00:18:49.772 "action_on_timeout": "none", 00:18:49.772 "timeout_us": 0, 00:18:49.772 "timeout_admin_us": 0, 00:18:49.772 "keep_alive_timeout_ms": 10000, 00:18:49.772 "arbitration_burst": 0, 00:18:49.772 "low_priority_weight": 0, 00:18:49.772 "medium_priority_weight": 0, 00:18:49.772 "high_priority_weight": 0, 00:18:49.772 "nvme_adminq_poll_period_us": 10000, 00:18:49.772 "nvme_ioq_poll_period_us": 0, 00:18:49.772 "io_queue_requests": 0, 00:18:49.772 "delay_cmd_submit": true, 00:18:49.772 "transport_retry_count": 4, 00:18:49.772 "bdev_retry_count": 3, 00:18:49.772 "transport_ack_timeout": 0, 00:18:49.772 "ctrlr_loss_timeout_sec": 0, 00:18:49.772 "reconnect_delay_sec": 0, 00:18:49.772 "fast_io_fail_timeout_sec": 0, 00:18:49.772 "disable_auto_failback": false, 00:18:49.772 "generate_uuids": false, 00:18:49.772 "transport_tos": 0, 00:18:49.772 "nvme_error_stat": false, 00:18:49.772 "rdma_srq_size": 0, 00:18:49.772 "io_path_stat": false, 00:18:49.772 "allow_accel_sequence": false, 00:18:49.772 "rdma_max_cq_size": 0, 00:18:49.772 "rdma_cm_event_timeout_ms": 0, 00:18:49.772 "dhchap_digests": [ 00:18:49.772 "sha256", 00:18:49.772 "sha384", 00:18:49.772 "sha512" 00:18:49.772 ], 00:18:49.772 "dhchap_dhgroups": [ 00:18:49.772 "null", 00:18:49.772 "ffdhe2048", 00:18:49.772 "ffdhe3072", 00:18:49.772 "ffdhe4096", 00:18:49.772 "ffdhe6144", 00:18:49.772 "ffdhe8192" 00:18:49.772 ] 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": 
"bdev_nvme_set_hotplug", 00:18:49.772 "params": { 00:18:49.772 "period_us": 100000, 00:18:49.772 "enable": false 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "bdev_malloc_create", 00:18:49.772 "params": { 00:18:49.772 "name": "malloc0", 00:18:49.772 "num_blocks": 8192, 00:18:49.772 "block_size": 4096, 00:18:49.772 "physical_block_size": 4096, 00:18:49.772 "uuid": "49d301a2-fff2-48e4-ad99-a475a8c95deb", 00:18:49.772 "optimal_io_boundary": 0 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "bdev_wait_for_examine" 00:18:49.772 } 00:18:49.772 ] 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "subsystem": "nbd", 00:18:49.772 "config": [] 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "subsystem": "scheduler", 00:18:49.772 "config": [ 00:18:49.772 { 00:18:49.772 "method": "framework_set_scheduler", 00:18:49.772 "params": { 00:18:49.772 "name": "static" 00:18:49.772 } 00:18:49.772 } 00:18:49.772 ] 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "subsystem": "nvmf", 00:18:49.772 "config": [ 00:18:49.772 { 00:18:49.772 "method": "nvmf_set_config", 00:18:49.772 "params": { 00:18:49.772 "discovery_filter": "match_any", 00:18:49.772 "admin_cmd_passthru": { 00:18:49.772 "identify_ctrlr": false 00:18:49.772 } 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "nvmf_set_max_subsystems", 00:18:49.772 "params": { 00:18:49.772 "max_subsystems": 1024 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "nvmf_set_crdt", 00:18:49.772 "params": { 00:18:49.772 "crdt1": 0, 00:18:49.772 "crdt2": 0, 00:18:49.772 "crdt3": 0 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "nvmf_create_transport", 00:18:49.772 "params": { 00:18:49.772 "trtype": "TCP", 00:18:49.772 "max_queue_depth": 128, 00:18:49.772 "max_io_qpairs_per_ctrlr": 127, 00:18:49.772 "in_capsule_data_size": 4096, 00:18:49.772 "max_io_size": 131072, 00:18:49.772 "io_unit_size": 131072, 00:18:49.772 "max_aq_depth": 128, 00:18:49.772 "num_shared_buffers": 511, 00:18:49.772 "buf_cache_size": 4294967295, 00:18:49.772 "dif_insert_or_strip": false, 00:18:49.772 "zcopy": false, 00:18:49.772 "c2h_success": false, 00:18:49.772 "sock_priority": 0, 00:18:49.772 "abort_timeout_sec": 1, 00:18:49.772 "ack_timeout": 0, 00:18:49.772 "data_wr_pool_size": 0 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "nvmf_create_subsystem", 00:18:49.772 "params": { 00:18:49.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.772 "allow_any_host": false, 00:18:49.772 "serial_number": "00000000000000000000", 00:18:49.772 "model_number": "SPDK bdev Controller", 00:18:49.772 "max_namespaces": 32, 00:18:49.772 "min_cntlid": 1, 00:18:49.772 "max_cntlid": 65519, 00:18:49.772 "ana_reporting": false 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "nvmf_subsystem_add_host", 00:18:49.772 "params": { 00:18:49.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.772 "host": "nqn.2016-06.io.spdk:host1", 00:18:49.772 "psk": "key0" 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "nvmf_subsystem_add_ns", 00:18:49.772 "params": { 00:18:49.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.772 "namespace": { 00:18:49.772 "nsid": 1, 00:18:49.772 "bdev_name": "malloc0", 00:18:49.772 "nguid": "49D301A2FFF248E4AD99A475A8C95DEB", 00:18:49.772 "uuid": "49d301a2-fff2-48e4-ad99-a475a8c95deb", 00:18:49.772 "no_auto_visible": false 00:18:49.772 } 00:18:49.772 } 00:18:49.772 }, 00:18:49.772 { 00:18:49.772 "method": "nvmf_subsystem_add_listener", 00:18:49.772 "params": { 
00:18:49.772 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.772 "listen_address": { 00:18:49.772 "trtype": "TCP", 00:18:49.772 "adrfam": "IPv4", 00:18:49.772 "traddr": "10.0.0.2", 00:18:49.772 "trsvcid": "4420" 00:18:49.772 }, 00:18:49.772 "secure_channel": true 00:18:49.772 } 00:18:49.772 } 00:18:49.772 ] 00:18:49.772 } 00:18:49.772 ] 00:18:49.772 }' 00:18:49.772 20:50:14 -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:50.032 20:50:14 -- target/tls.sh@264 -- # bperfcfg='{ 00:18:50.032 "subsystems": [ 00:18:50.032 { 00:18:50.032 "subsystem": "keyring", 00:18:50.032 "config": [ 00:18:50.032 { 00:18:50.032 "method": "keyring_file_add_key", 00:18:50.032 "params": { 00:18:50.032 "name": "key0", 00:18:50.033 "path": "/tmp/tmp.a5cbY8fw61" 00:18:50.033 } 00:18:50.033 } 00:18:50.033 ] 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "subsystem": "iobuf", 00:18:50.033 "config": [ 00:18:50.033 { 00:18:50.033 "method": "iobuf_set_options", 00:18:50.033 "params": { 00:18:50.033 "small_pool_count": 8192, 00:18:50.033 "large_pool_count": 1024, 00:18:50.033 "small_bufsize": 8192, 00:18:50.033 "large_bufsize": 135168 00:18:50.033 } 00:18:50.033 } 00:18:50.033 ] 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "subsystem": "sock", 00:18:50.033 "config": [ 00:18:50.033 { 00:18:50.033 "method": "sock_impl_set_options", 00:18:50.033 "params": { 00:18:50.033 "impl_name": "posix", 00:18:50.033 "recv_buf_size": 2097152, 00:18:50.033 "send_buf_size": 2097152, 00:18:50.033 "enable_recv_pipe": true, 00:18:50.033 "enable_quickack": false, 00:18:50.033 "enable_placement_id": 0, 00:18:50.033 "enable_zerocopy_send_server": true, 00:18:50.033 "enable_zerocopy_send_client": false, 00:18:50.033 "zerocopy_threshold": 0, 00:18:50.033 "tls_version": 0, 00:18:50.033 "enable_ktls": false 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": "sock_impl_set_options", 00:18:50.033 "params": { 00:18:50.033 "impl_name": "ssl", 00:18:50.033 "recv_buf_size": 4096, 00:18:50.033 "send_buf_size": 4096, 00:18:50.033 "enable_recv_pipe": true, 00:18:50.033 "enable_quickack": false, 00:18:50.033 "enable_placement_id": 0, 00:18:50.033 "enable_zerocopy_send_server": true, 00:18:50.033 "enable_zerocopy_send_client": false, 00:18:50.033 "zerocopy_threshold": 0, 00:18:50.033 "tls_version": 0, 00:18:50.033 "enable_ktls": false 00:18:50.033 } 00:18:50.033 } 00:18:50.033 ] 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "subsystem": "vmd", 00:18:50.033 "config": [] 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "subsystem": "accel", 00:18:50.033 "config": [ 00:18:50.033 { 00:18:50.033 "method": "accel_set_options", 00:18:50.033 "params": { 00:18:50.033 "small_cache_size": 128, 00:18:50.033 "large_cache_size": 16, 00:18:50.033 "task_count": 2048, 00:18:50.033 "sequence_count": 2048, 00:18:50.033 "buf_count": 2048 00:18:50.033 } 00:18:50.033 } 00:18:50.033 ] 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "subsystem": "bdev", 00:18:50.033 "config": [ 00:18:50.033 { 00:18:50.033 "method": "bdev_set_options", 00:18:50.033 "params": { 00:18:50.033 "bdev_io_pool_size": 65535, 00:18:50.033 "bdev_io_cache_size": 256, 00:18:50.033 "bdev_auto_examine": true, 00:18:50.033 "iobuf_small_cache_size": 128, 00:18:50.033 "iobuf_large_cache_size": 16 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": "bdev_raid_set_options", 00:18:50.033 "params": { 00:18:50.033 "process_window_size_kb": 1024 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": 
"bdev_iscsi_set_options", 00:18:50.033 "params": { 00:18:50.033 "timeout_sec": 30 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": "bdev_nvme_set_options", 00:18:50.033 "params": { 00:18:50.033 "action_on_timeout": "none", 00:18:50.033 "timeout_us": 0, 00:18:50.033 "timeout_admin_us": 0, 00:18:50.033 "keep_alive_timeout_ms": 10000, 00:18:50.033 "arbitration_burst": 0, 00:18:50.033 "low_priority_weight": 0, 00:18:50.033 "medium_priority_weight": 0, 00:18:50.033 "high_priority_weight": 0, 00:18:50.033 "nvme_adminq_poll_period_us": 10000, 00:18:50.033 "nvme_ioq_poll_period_us": 0, 00:18:50.033 "io_queue_requests": 512, 00:18:50.033 "delay_cmd_submit": true, 00:18:50.033 "transport_retry_count": 4, 00:18:50.033 "bdev_retry_count": 3, 00:18:50.033 "transport_ack_timeout": 0, 00:18:50.033 "ctrlr_loss_timeout_sec": 0, 00:18:50.033 "reconnect_delay_sec": 0, 00:18:50.033 "fast_io_fail_timeout_sec": 0, 00:18:50.033 "disable_auto_failback": false, 00:18:50.033 "generate_uuids": false, 00:18:50.033 "transport_tos": 0, 00:18:50.033 "nvme_error_stat": false, 00:18:50.033 "rdma_srq_size": 0, 00:18:50.033 "io_path_stat": false, 00:18:50.033 "allow_accel_sequence": false, 00:18:50.033 "rdma_max_cq_size": 0, 00:18:50.033 "rdma_cm_event_timeout_ms": 0, 00:18:50.033 "dhchap_digests": [ 00:18:50.033 "sha256", 00:18:50.033 "sha384", 00:18:50.033 "sha512" 00:18:50.033 ], 00:18:50.033 "dhchap_dhgroups": [ 00:18:50.033 "null", 00:18:50.033 "ffdhe2048", 00:18:50.033 "ffdhe3072", 00:18:50.033 "ffdhe4096", 00:18:50.033 "ffdhe6144", 00:18:50.033 "ffdhe8192" 00:18:50.033 ] 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": "bdev_nvme_attach_controller", 00:18:50.033 "params": { 00:18:50.033 "name": "nvme0", 00:18:50.033 "trtype": "TCP", 00:18:50.033 "adrfam": "IPv4", 00:18:50.033 "traddr": "10.0.0.2", 00:18:50.033 "trsvcid": "4420", 00:18:50.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.033 "prchk_reftag": false, 00:18:50.033 "prchk_guard": false, 00:18:50.033 "ctrlr_loss_timeout_sec": 0, 00:18:50.033 "reconnect_delay_sec": 0, 00:18:50.033 "fast_io_fail_timeout_sec": 0, 00:18:50.033 "psk": "key0", 00:18:50.033 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:50.033 "hdgst": false, 00:18:50.033 "ddgst": false 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": "bdev_nvme_set_hotplug", 00:18:50.033 "params": { 00:18:50.033 "period_us": 100000, 00:18:50.033 "enable": false 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": "bdev_enable_histogram", 00:18:50.033 "params": { 00:18:50.033 "name": "nvme0n1", 00:18:50.033 "enable": true 00:18:50.033 } 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "method": "bdev_wait_for_examine" 00:18:50.033 } 00:18:50.033 ] 00:18:50.033 }, 00:18:50.033 { 00:18:50.033 "subsystem": "nbd", 00:18:50.033 "config": [] 00:18:50.033 } 00:18:50.033 ] 00:18:50.033 }' 00:18:50.033 20:50:14 -- target/tls.sh@266 -- # killprocess 2800219 00:18:50.033 20:50:14 -- common/autotest_common.sh@936 -- # '[' -z 2800219 ']' 00:18:50.033 20:50:14 -- common/autotest_common.sh@940 -- # kill -0 2800219 00:18:50.033 20:50:14 -- common/autotest_common.sh@941 -- # uname 00:18:50.033 20:50:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.033 20:50:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2800219 00:18:50.033 20:50:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:50.033 20:50:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:50.033 20:50:14 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2800219' 00:18:50.033 killing process with pid 2800219 00:18:50.033 20:50:14 -- common/autotest_common.sh@955 -- # kill 2800219 00:18:50.033 Received shutdown signal, test time was about 1.000000 seconds 00:18:50.033 00:18:50.033 Latency(us) 00:18:50.033 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.033 =================================================================================================================== 00:18:50.033 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.033 20:50:14 -- common/autotest_common.sh@960 -- # wait 2800219 00:18:50.294 20:50:14 -- target/tls.sh@267 -- # killprocess 2799991 00:18:50.294 20:50:14 -- common/autotest_common.sh@936 -- # '[' -z 2799991 ']' 00:18:50.294 20:50:14 -- common/autotest_common.sh@940 -- # kill -0 2799991 00:18:50.294 20:50:14 -- common/autotest_common.sh@941 -- # uname 00:18:50.294 20:50:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:50.294 20:50:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2799991 00:18:50.294 20:50:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:50.294 20:50:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:50.294 20:50:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2799991' 00:18:50.294 killing process with pid 2799991 00:18:50.294 20:50:14 -- common/autotest_common.sh@955 -- # kill 2799991 00:18:50.294 20:50:14 -- common/autotest_common.sh@960 -- # wait 2799991 00:18:50.294 20:50:14 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:18:50.294 20:50:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:50.294 20:50:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:50.294 20:50:14 -- common/autotest_common.sh@10 -- # set +x 00:18:50.294 20:50:14 -- target/tls.sh@269 -- # echo '{ 00:18:50.294 "subsystems": [ 00:18:50.294 { 00:18:50.294 "subsystem": "keyring", 00:18:50.294 "config": [ 00:18:50.294 { 00:18:50.294 "method": "keyring_file_add_key", 00:18:50.294 "params": { 00:18:50.294 "name": "key0", 00:18:50.294 "path": "/tmp/tmp.a5cbY8fw61" 00:18:50.294 } 00:18:50.294 } 00:18:50.294 ] 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "subsystem": "iobuf", 00:18:50.294 "config": [ 00:18:50.294 { 00:18:50.294 "method": "iobuf_set_options", 00:18:50.294 "params": { 00:18:50.294 "small_pool_count": 8192, 00:18:50.294 "large_pool_count": 1024, 00:18:50.294 "small_bufsize": 8192, 00:18:50.294 "large_bufsize": 135168 00:18:50.294 } 00:18:50.294 } 00:18:50.294 ] 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "subsystem": "sock", 00:18:50.294 "config": [ 00:18:50.294 { 00:18:50.294 "method": "sock_impl_set_options", 00:18:50.294 "params": { 00:18:50.294 "impl_name": "posix", 00:18:50.294 "recv_buf_size": 2097152, 00:18:50.294 "send_buf_size": 2097152, 00:18:50.294 "enable_recv_pipe": true, 00:18:50.294 "enable_quickack": false, 00:18:50.294 "enable_placement_id": 0, 00:18:50.294 "enable_zerocopy_send_server": true, 00:18:50.294 "enable_zerocopy_send_client": false, 00:18:50.294 "zerocopy_threshold": 0, 00:18:50.294 "tls_version": 0, 00:18:50.294 "enable_ktls": false 00:18:50.294 } 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "method": "sock_impl_set_options", 00:18:50.294 "params": { 00:18:50.294 "impl_name": "ssl", 00:18:50.294 "recv_buf_size": 4096, 00:18:50.294 "send_buf_size": 4096, 00:18:50.294 "enable_recv_pipe": true, 00:18:50.294 "enable_quickack": false, 00:18:50.294 "enable_placement_id": 
0, 00:18:50.294 "enable_zerocopy_send_server": true, 00:18:50.294 "enable_zerocopy_send_client": false, 00:18:50.294 "zerocopy_threshold": 0, 00:18:50.294 "tls_version": 0, 00:18:50.294 "enable_ktls": false 00:18:50.294 } 00:18:50.294 } 00:18:50.294 ] 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "subsystem": "vmd", 00:18:50.294 "config": [] 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "subsystem": "accel", 00:18:50.294 "config": [ 00:18:50.294 { 00:18:50.294 "method": "accel_set_options", 00:18:50.294 "params": { 00:18:50.294 "small_cache_size": 128, 00:18:50.294 "large_cache_size": 16, 00:18:50.294 "task_count": 2048, 00:18:50.294 "sequence_count": 2048, 00:18:50.294 "buf_count": 2048 00:18:50.294 } 00:18:50.294 } 00:18:50.294 ] 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "subsystem": "bdev", 00:18:50.294 "config": [ 00:18:50.294 { 00:18:50.294 "method": "bdev_set_options", 00:18:50.294 "params": { 00:18:50.294 "bdev_io_pool_size": 65535, 00:18:50.294 "bdev_io_cache_size": 256, 00:18:50.294 "bdev_auto_examine": true, 00:18:50.294 "iobuf_small_cache_size": 128, 00:18:50.294 "iobuf_large_cache_size": 16 00:18:50.294 } 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "method": "bdev_raid_set_options", 00:18:50.294 "params": { 00:18:50.294 "process_window_size_kb": 1024 00:18:50.294 } 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "method": "bdev_iscsi_set_options", 00:18:50.294 "params": { 00:18:50.294 "timeout_sec": 30 00:18:50.294 } 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "method": "bdev_nvme_set_options", 00:18:50.294 "params": { 00:18:50.294 "action_on_timeout": "none", 00:18:50.294 "timeout_us": 0, 00:18:50.294 "timeout_admin_us": 0, 00:18:50.294 "keep_alive_timeout_ms": 10000, 00:18:50.294 "arbitration_burst": 0, 00:18:50.294 "low_priority_weight": 0, 00:18:50.294 "medium_priority_weight": 0, 00:18:50.294 "high_priority_weight": 0, 00:18:50.294 "nvme_adminq_poll_period_us": 10000, 00:18:50.294 "nvme_ioq_poll_period_us": 0, 00:18:50.294 "io_queue_requests": 0, 00:18:50.294 "delay_cmd_submit": true, 00:18:50.294 "transport_retry_count": 4, 00:18:50.294 "bdev_retry_count": 3, 00:18:50.294 "transport_ack_timeout": 0, 00:18:50.294 "ctrlr_loss_timeout_sec": 0, 00:18:50.294 "reconnect_delay_sec": 0, 00:18:50.294 "fast_io_fail_timeout_sec": 0, 00:18:50.294 "disable_auto_failback": false, 00:18:50.294 "generate_uuids": false, 00:18:50.294 "transport_tos": 0, 00:18:50.294 "nvme_error_stat": false, 00:18:50.294 "rdma_srq_size": 0, 00:18:50.294 "io_path_stat": false, 00:18:50.294 "allow_accel_sequence": false, 00:18:50.294 "rdma_max_cq_size": 0, 00:18:50.294 "rdma_cm_event_timeout_ms": 0, 00:18:50.294 "dhchap_digests": [ 00:18:50.294 "sha256", 00:18:50.294 "sha384", 00:18:50.294 "sha512" 00:18:50.294 ], 00:18:50.294 "dhchap_dhgroups": [ 00:18:50.294 "null", 00:18:50.294 "ffdhe2048", 00:18:50.294 "ffdhe3072", 00:18:50.294 "ffdhe4096", 00:18:50.294 "ffdhe6144", 00:18:50.294 "ffdhe8192" 00:18:50.294 ] 00:18:50.294 } 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "method": "bdev_nvme_set_hotplug", 00:18:50.294 "params": { 00:18:50.294 "period_us": 100000, 00:18:50.294 "enable": false 00:18:50.294 } 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "method": "bdev_malloc_create", 00:18:50.294 "params": { 00:18:50.294 "name": "malloc0", 00:18:50.294 "num_blocks": 8192, 00:18:50.294 "block_size": 4096, 00:18:50.294 "physical_block_size": 4096, 00:18:50.294 "uuid": "49d301a2-fff2-48e4-ad99-a475a8c95deb", 00:18:50.294 "optimal_io_boundary": 0 00:18:50.294 } 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "method": 
"bdev_wait_for_examine" 00:18:50.294 } 00:18:50.294 ] 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "subsystem": "nbd", 00:18:50.294 "config": [] 00:18:50.294 }, 00:18:50.294 { 00:18:50.294 "subsystem": "scheduler", 00:18:50.294 "config": [ 00:18:50.294 { 00:18:50.294 "method": "framework_set_scheduler", 00:18:50.294 "params": { 00:18:50.294 "name": "static" 00:18:50.294 } 00:18:50.294 } 00:18:50.294 ] 00:18:50.294 }, 00:18:50.295 { 00:18:50.295 "subsystem": "nvmf", 00:18:50.295 "config": [ 00:18:50.295 { 00:18:50.295 "method": "nvmf_set_config", 00:18:50.295 "params": { 00:18:50.295 "discovery_filter": "match_any", 00:18:50.295 "admin_cmd_passthru": { 00:18:50.295 "identify_ctrlr": false 00:18:50.295 } 00:18:50.295 } 00:18:50.295 }, 00:18:50.295 { 00:18:50.295 "method": "nvmf_set_max_subsystems", 00:18:50.295 "params": { 00:18:50.295 "max_subsystems": 1024 00:18:50.295 } 00:18:50.295 }, 00:18:50.295 { 00:18:50.295 "method": "nvmf_set_crdt", 00:18:50.295 "params": { 00:18:50.295 "crdt1": 0, 00:18:50.295 "crdt2": 0, 00:18:50.295 "crdt3": 0 00:18:50.295 } 00:18:50.295 }, 00:18:50.295 { 00:18:50.295 "method": "nvmf_create_transport", 00:18:50.295 "params": { 00:18:50.295 "trtype": "TCP", 00:18:50.295 "max_queue_depth": 128, 00:18:50.295 "max_io_qpairs_per_ctrlr": 127, 00:18:50.295 "in_capsule_data_size": 4096, 00:18:50.295 "max_io_size": 131072, 00:18:50.295 "io_unit_size": 131072, 00:18:50.295 "max_aq_depth": 128, 00:18:50.295 "num_shared_buffers": 511, 00:18:50.295 "buf_cache_size": 4294967295, 00:18:50.295 "dif_insert_or_strip": false, 00:18:50.295 "zcopy": false, 00:18:50.295 "c2h_success": false, 00:18:50.295 "sock_priority": 0, 00:18:50.295 "abort_timeout_sec": 1, 00:18:50.295 "ack_timeout": 0, 00:18:50.295 "data_wr_pool_size": 0 00:18:50.295 } 00:18:50.295 }, 00:18:50.295 { 00:18:50.295 "method": "nvmf_create_subsystem", 00:18:50.295 "params": { 00:18:50.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.295 "allow_any_host": false, 00:18:50.295 "serial_number": "00000000000000000000", 00:18:50.295 "model_number": "SPDK bdev Controller", 00:18:50.295 "max_namespaces": 32, 00:18:50.295 "min_cntlid": 1, 00:18:50.295 "max_cntlid": 65519, 00:18:50.295 "ana_reporting": false 00:18:50.295 } 00:18:50.295 }, 00:18:50.295 { 00:18:50.295 "method": "nvmf_subsystem_add_host", 00:18:50.295 "params": { 00:18:50.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.295 "host": "nqn.2016-06.io.spdk:host1", 00:18:50.295 "psk": "key0" 00:18:50.295 } 00:18:50.295 }, 00:18:50.295 { 00:18:50.295 "method": "nvmf_subsystem_add_ns", 00:18:50.295 "params": { 00:18:50.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.295 "namespace": { 00:18:50.295 "nsid": 1, 00:18:50.295 "bdev_name": "malloc0", 00:18:50.295 "nguid": "49D301A2FFF248E4AD99A475A8C95DEB", 00:18:50.295 "uuid": "49d301a2-fff2-48e4-ad99-a475a8c95deb", 00:18:50.295 "no_auto_visible": false 00:18:50.295 } 00:18:50.295 } 00:18:50.295 }, 00:18:50.295 { 00:18:50.295 "method": "nvmf_subsystem_add_listener", 00:18:50.295 "params": { 00:18:50.295 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.295 "listen_address": { 00:18:50.295 "trtype": "TCP", 00:18:50.295 "adrfam": "IPv4", 00:18:50.295 "traddr": "10.0.0.2", 00:18:50.295 "trsvcid": "4420" 00:18:50.295 }, 00:18:50.295 "secure_channel": true 00:18:50.295 } 00:18:50.295 } 00:18:50.295 ] 00:18:50.295 } 00:18:50.295 ] 00:18:50.295 }' 00:18:50.555 20:50:14 -- nvmf/common.sh@470 -- # nvmfpid=2800836 00:18:50.555 20:50:14 -- nvmf/common.sh@471 -- # waitforlisten 2800836 00:18:50.555 20:50:14 -- nvmf/common.sh@469 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:50.555 20:50:14 -- common/autotest_common.sh@817 -- # '[' -z 2800836 ']' 00:18:50.555 20:50:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.555 20:50:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:50.555 20:50:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.555 20:50:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:50.555 20:50:14 -- common/autotest_common.sh@10 -- # set +x 00:18:50.555 [2024-04-24 20:50:14.985755] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:18:50.555 [2024-04-24 20:50:14.985814] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.555 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.555 [2024-04-24 20:50:15.067181] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.555 [2024-04-24 20:50:15.129380] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.555 [2024-04-24 20:50:15.129415] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.555 [2024-04-24 20:50:15.129422] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.555 [2024-04-24 20:50:15.129428] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.555 [2024-04-24 20:50:15.129437] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.555 [2024-04-24 20:50:15.129488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.815 [2024-04-24 20:50:15.318695] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.815 [2024-04-24 20:50:15.350702] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.815 [2024-04-24 20:50:15.358056] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.384 20:50:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:51.384 20:50:15 -- common/autotest_common.sh@850 -- # return 0 00:18:51.384 20:50:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:51.384 20:50:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:51.384 20:50:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.384 20:50:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.384 20:50:15 -- target/tls.sh@272 -- # bdevperf_pid=2800927 00:18:51.384 20:50:15 -- target/tls.sh@273 -- # waitforlisten 2800927 /var/tmp/bdevperf.sock 00:18:51.384 20:50:15 -- common/autotest_common.sh@817 -- # '[' -z 2800927 ']' 00:18:51.384 20:50:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.384 20:50:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:51.384 20:50:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:51.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.384 20:50:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:51.384 20:50:15 -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:51.384 20:50:15 -- common/autotest_common.sh@10 -- # set +x 00:18:51.384 20:50:15 -- target/tls.sh@270 -- # echo '{ 00:18:51.384 "subsystems": [ 00:18:51.384 { 00:18:51.384 "subsystem": "keyring", 00:18:51.384 "config": [ 00:18:51.384 { 00:18:51.384 "method": "keyring_file_add_key", 00:18:51.384 "params": { 00:18:51.384 "name": "key0", 00:18:51.384 "path": "/tmp/tmp.a5cbY8fw61" 00:18:51.384 } 00:18:51.384 } 00:18:51.384 ] 00:18:51.384 }, 00:18:51.384 { 00:18:51.384 "subsystem": "iobuf", 00:18:51.384 "config": [ 00:18:51.384 { 00:18:51.384 "method": "iobuf_set_options", 00:18:51.384 "params": { 00:18:51.384 "small_pool_count": 8192, 00:18:51.384 "large_pool_count": 1024, 00:18:51.384 "small_bufsize": 8192, 00:18:51.384 "large_bufsize": 135168 00:18:51.384 } 00:18:51.384 } 00:18:51.384 ] 00:18:51.384 }, 00:18:51.384 { 00:18:51.384 "subsystem": "sock", 00:18:51.384 "config": [ 00:18:51.384 { 00:18:51.384 "method": "sock_impl_set_options", 00:18:51.384 "params": { 00:18:51.384 "impl_name": "posix", 00:18:51.384 "recv_buf_size": 2097152, 00:18:51.384 "send_buf_size": 2097152, 00:18:51.384 "enable_recv_pipe": true, 00:18:51.384 "enable_quickack": false, 00:18:51.384 "enable_placement_id": 0, 00:18:51.384 "enable_zerocopy_send_server": true, 00:18:51.384 "enable_zerocopy_send_client": false, 00:18:51.384 "zerocopy_threshold": 0, 00:18:51.384 "tls_version": 0, 00:18:51.384 "enable_ktls": false 00:18:51.384 } 00:18:51.384 }, 00:18:51.384 { 00:18:51.384 "method": "sock_impl_set_options", 00:18:51.384 "params": { 00:18:51.384 "impl_name": "ssl", 00:18:51.384 "recv_buf_size": 4096, 00:18:51.384 "send_buf_size": 4096, 00:18:51.384 "enable_recv_pipe": true, 00:18:51.384 "enable_quickack": false, 00:18:51.384 "enable_placement_id": 0, 00:18:51.384 "enable_zerocopy_send_server": true, 00:18:51.385 "enable_zerocopy_send_client": false, 00:18:51.385 "zerocopy_threshold": 0, 00:18:51.385 "tls_version": 0, 00:18:51.385 "enable_ktls": false 00:18:51.385 } 00:18:51.385 } 00:18:51.385 ] 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "subsystem": "vmd", 00:18:51.385 "config": [] 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "subsystem": "accel", 00:18:51.385 "config": [ 00:18:51.385 { 00:18:51.385 "method": "accel_set_options", 00:18:51.385 "params": { 00:18:51.385 "small_cache_size": 128, 00:18:51.385 "large_cache_size": 16, 00:18:51.385 "task_count": 2048, 00:18:51.385 "sequence_count": 2048, 00:18:51.385 "buf_count": 2048 00:18:51.385 } 00:18:51.385 } 00:18:51.385 ] 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "subsystem": "bdev", 00:18:51.385 "config": [ 00:18:51.385 { 00:18:51.385 "method": "bdev_set_options", 00:18:51.385 "params": { 00:18:51.385 "bdev_io_pool_size": 65535, 00:18:51.385 "bdev_io_cache_size": 256, 00:18:51.385 "bdev_auto_examine": true, 00:18:51.385 "iobuf_small_cache_size": 128, 00:18:51.385 "iobuf_large_cache_size": 16 00:18:51.385 } 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "method": "bdev_raid_set_options", 00:18:51.385 "params": { 00:18:51.385 "process_window_size_kb": 1024 00:18:51.385 } 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "method": "bdev_iscsi_set_options", 00:18:51.385 "params": { 00:18:51.385 
"timeout_sec": 30 00:18:51.385 } 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "method": "bdev_nvme_set_options", 00:18:51.385 "params": { 00:18:51.385 "action_on_timeout": "none", 00:18:51.385 "timeout_us": 0, 00:18:51.385 "timeout_admin_us": 0, 00:18:51.385 "keep_alive_timeout_ms": 10000, 00:18:51.385 "arbitration_burst": 0, 00:18:51.385 "low_priority_weight": 0, 00:18:51.385 "medium_priority_weight": 0, 00:18:51.385 "high_priority_weight": 0, 00:18:51.385 "nvme_adminq_poll_period_us": 10000, 00:18:51.385 "nvme_ioq_poll_period_us": 0, 00:18:51.385 "io_queue_requests": 512, 00:18:51.385 "delay_cmd_submit": true, 00:18:51.385 "transport_retry_count": 4, 00:18:51.385 "bdev_retry_count": 3, 00:18:51.385 "transport_ack_timeout": 0, 00:18:51.385 "ctrlr_loss_timeout_sec": 0, 00:18:51.385 "reconnect_delay_sec": 0, 00:18:51.385 "fast_io_fail_timeout_sec": 0, 00:18:51.385 "disable_auto_failback": false, 00:18:51.385 "generate_uuids": false, 00:18:51.385 "transport_tos": 0, 00:18:51.385 "nvme_error_stat": false, 00:18:51.385 "rdma_srq_size": 0, 00:18:51.385 "io_path_stat": false, 00:18:51.385 "allow_accel_sequence": false, 00:18:51.385 "rdma_max_cq_size": 0, 00:18:51.385 "rdma_cm_event_timeout_ms": 0, 00:18:51.385 "dhchap_digests": [ 00:18:51.385 "sha256", 00:18:51.385 "sha384", 00:18:51.385 "sha512" 00:18:51.385 ], 00:18:51.385 "dhchap_dhgroups": [ 00:18:51.385 "null", 00:18:51.385 "ffdhe2048", 00:18:51.385 "ffdhe3072", 00:18:51.385 "ffdhe4096", 00:18:51.385 "ffdhe6144", 00:18:51.385 "ffdhe8192" 00:18:51.385 ] 00:18:51.385 } 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "method": "bdev_nvme_attach_controller", 00:18:51.385 "params": { 00:18:51.385 "name": "nvme0", 00:18:51.385 "trtype": "TCP", 00:18:51.385 "adrfam": "IPv4", 00:18:51.385 "traddr": "10.0.0.2", 00:18:51.385 "trsvcid": "4420", 00:18:51.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:51.385 "prchk_reftag": false, 00:18:51.385 "prchk_guard": false, 00:18:51.385 "ctrlr_loss_timeout_sec": 0, 00:18:51.385 "reconnect_delay_sec": 0, 00:18:51.385 "fast_io_fail_timeout_sec": 0, 00:18:51.385 "psk": "key0", 00:18:51.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.385 "hdgst": false, 00:18:51.385 "ddgst": false 00:18:51.385 } 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "method": "bdev_nvme_set_hotplug", 00:18:51.385 "params": { 00:18:51.385 "period_us": 100000, 00:18:51.385 "enable": false 00:18:51.385 } 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "method": "bdev_enable_histogram", 00:18:51.385 "params": { 00:18:51.385 "name": "nvme0n1", 00:18:51.385 "enable": true 00:18:51.385 } 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "method": "bdev_wait_for_examine" 00:18:51.385 } 00:18:51.385 ] 00:18:51.385 }, 00:18:51.385 { 00:18:51.385 "subsystem": "nbd", 00:18:51.385 "config": [] 00:18:51.385 } 00:18:51.385 ] 00:18:51.385 }' 00:18:51.385 [2024-04-24 20:50:15.928905] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:18:51.385 [2024-04-24 20:50:15.928957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800927 ] 00:18:51.385 EAL: No free 2048 kB hugepages reported on node 1 00:18:51.385 [2024-04-24 20:50:15.987406] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.644 [2024-04-24 20:50:16.050618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.644 [2024-04-24 20:50:16.181141] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.213 20:50:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:52.213 20:50:16 -- common/autotest_common.sh@850 -- # return 0 00:18:52.213 20:50:16 -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:52.213 20:50:16 -- target/tls.sh@275 -- # jq -r '.[].name' 00:18:52.499 20:50:16 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.499 20:50:16 -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.499 Running I/O for 1 seconds... 00:18:53.879 00:18:53.879 Latency(us) 00:18:53.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.879 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:53.879 Verification LBA range: start 0x0 length 0x2000 00:18:53.879 nvme0n1 : 1.02 3824.17 14.94 0.00 0.00 33121.97 6772.05 31457.28 00:18:53.879 =================================================================================================================== 00:18:53.879 Total : 3824.17 14.94 0.00 0.00 33121.97 6772.05 31457.28 00:18:53.879 0 00:18:53.879 20:50:18 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:18:53.879 20:50:18 -- target/tls.sh@279 -- # cleanup 00:18:53.879 20:50:18 -- target/tls.sh@15 -- # process_shm --id 0 00:18:53.879 20:50:18 -- common/autotest_common.sh@794 -- # type=--id 00:18:53.879 20:50:18 -- common/autotest_common.sh@795 -- # id=0 00:18:53.879 20:50:18 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:18:53.879 20:50:18 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:53.879 20:50:18 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:18:53.879 20:50:18 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:18:53.879 20:50:18 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:18:53.879 20:50:18 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:53.879 nvmf_trace.0 00:18:53.879 20:50:18 -- common/autotest_common.sh@809 -- # return 0 00:18:53.879 20:50:18 -- target/tls.sh@16 -- # killprocess 2800927 00:18:53.879 20:50:18 -- common/autotest_common.sh@936 -- # '[' -z 2800927 ']' 00:18:53.879 20:50:18 -- common/autotest_common.sh@940 -- # kill -0 2800927 00:18:53.879 20:50:18 -- common/autotest_common.sh@941 -- # uname 00:18:53.879 20:50:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:53.879 20:50:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2800927 00:18:53.879 20:50:18 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:53.879 20:50:18 -- common/autotest_common.sh@946 -- # 
'[' reactor_1 = sudo ']' 00:18:53.879 20:50:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2800927' 00:18:53.879 killing process with pid 2800927 00:18:53.879 20:50:18 -- common/autotest_common.sh@955 -- # kill 2800927 00:18:53.879 Received shutdown signal, test time was about 1.000000 seconds 00:18:53.879 00:18:53.879 Latency(us) 00:18:53.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.880 =================================================================================================================== 00:18:53.880 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.880 20:50:18 -- common/autotest_common.sh@960 -- # wait 2800927 00:18:53.880 20:50:18 -- target/tls.sh@17 -- # nvmftestfini 00:18:53.880 20:50:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:53.880 20:50:18 -- nvmf/common.sh@117 -- # sync 00:18:53.880 20:50:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.880 20:50:18 -- nvmf/common.sh@120 -- # set +e 00:18:53.880 20:50:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.880 20:50:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.880 rmmod nvme_tcp 00:18:53.880 rmmod nvme_fabrics 00:18:53.880 rmmod nvme_keyring 00:18:53.880 20:50:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.880 20:50:18 -- nvmf/common.sh@124 -- # set -e 00:18:53.880 20:50:18 -- nvmf/common.sh@125 -- # return 0 00:18:53.880 20:50:18 -- nvmf/common.sh@478 -- # '[' -n 2800836 ']' 00:18:53.880 20:50:18 -- nvmf/common.sh@479 -- # killprocess 2800836 00:18:53.880 20:50:18 -- common/autotest_common.sh@936 -- # '[' -z 2800836 ']' 00:18:53.880 20:50:18 -- common/autotest_common.sh@940 -- # kill -0 2800836 00:18:53.880 20:50:18 -- common/autotest_common.sh@941 -- # uname 00:18:53.880 20:50:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:53.880 20:50:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2800836 00:18:54.140 20:50:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:54.140 20:50:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:54.140 20:50:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2800836' 00:18:54.140 killing process with pid 2800836 00:18:54.140 20:50:18 -- common/autotest_common.sh@955 -- # kill 2800836 00:18:54.140 20:50:18 -- common/autotest_common.sh@960 -- # wait 2800836 00:18:54.140 20:50:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:54.140 20:50:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:54.140 20:50:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:54.140 20:50:18 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.140 20:50:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.140 20:50:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.140 20:50:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:54.140 20:50:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.721 20:50:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.721 20:50:20 -- target/tls.sh@18 -- # rm -f /tmp/tmp.XeyK83IiMD /tmp/tmp.6fZ2T9j3U6 /tmp/tmp.a5cbY8fw61 00:18:56.721 00:18:56.721 real 1m20.750s 00:18:56.722 user 2m4.874s 00:18:56.722 sys 0m26.313s 00:18:56.722 20:50:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:56.722 20:50:20 -- common/autotest_common.sh@10 -- # set +x 00:18:56.722 ************************************ 00:18:56.722 END TEST nvmf_tls 00:18:56.722 
************************************ 00:18:56.722 20:50:20 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.722 20:50:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:56.722 20:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.722 20:50:20 -- common/autotest_common.sh@10 -- # set +x 00:18:56.722 ************************************ 00:18:56.722 START TEST nvmf_fips 00:18:56.722 ************************************ 00:18:56.722 20:50:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:56.722 * Looking for test storage... 00:18:56.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:56.722 20:50:21 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:56.722 20:50:21 -- nvmf/common.sh@7 -- # uname -s 00:18:56.722 20:50:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:56.722 20:50:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:56.722 20:50:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:56.722 20:50:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:56.722 20:50:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:56.722 20:50:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:56.722 20:50:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:56.722 20:50:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:56.722 20:50:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:56.722 20:50:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:56.722 20:50:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:56.722 20:50:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:18:56.722 20:50:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:56.722 20:50:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:56.722 20:50:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:56.722 20:50:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:56.722 20:50:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:56.722 20:50:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:56.722 20:50:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:56.722 20:50:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:56.722 20:50:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.722 20:50:21 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.722 20:50:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.722 20:50:21 -- paths/export.sh@5 -- # export PATH 00:18:56.722 20:50:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:56.722 20:50:21 -- nvmf/common.sh@47 -- # : 0 00:18:56.722 20:50:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:56.722 20:50:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:56.722 20:50:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:56.722 20:50:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:56.722 20:50:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:56.722 20:50:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:56.722 20:50:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:56.722 20:50:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:56.722 20:50:21 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.722 20:50:21 -- fips/fips.sh@89 -- # check_openssl_version 00:18:56.722 20:50:21 -- fips/fips.sh@83 -- # local target=3.0.0 00:18:56.722 20:50:21 -- fips/fips.sh@85 -- # openssl version 00:18:56.722 20:50:21 -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:56.722 20:50:21 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:56.722 20:50:21 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:56.722 20:50:21 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:56.722 20:50:21 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:56.722 20:50:21 -- scripts/common.sh@333 -- # IFS=.-: 00:18:56.722 20:50:21 -- scripts/common.sh@333 -- # read -ra ver1 00:18:56.722 20:50:21 -- scripts/common.sh@334 -- # IFS=.-: 00:18:56.722 20:50:21 -- scripts/common.sh@334 -- # read -ra ver2 00:18:56.722 20:50:21 -- scripts/common.sh@335 -- # local 'op=>=' 00:18:56.722 20:50:21 -- scripts/common.sh@337 -- # ver1_l=3 00:18:56.722 20:50:21 -- scripts/common.sh@338 -- # ver2_l=3 00:18:56.722 20:50:21 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
00:18:56.722 20:50:21 -- scripts/common.sh@341 -- # case "$op" in 00:18:56.722 20:50:21 -- scripts/common.sh@345 -- # : 1 00:18:56.722 20:50:21 -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:56.722 20:50:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.722 20:50:21 -- scripts/common.sh@362 -- # decimal 3 00:18:56.722 20:50:21 -- scripts/common.sh@350 -- # local d=3 00:18:56.722 20:50:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.722 20:50:21 -- scripts/common.sh@352 -- # echo 3 00:18:56.722 20:50:21 -- scripts/common.sh@362 -- # ver1[v]=3 00:18:56.722 20:50:21 -- scripts/common.sh@363 -- # decimal 3 00:18:56.722 20:50:21 -- scripts/common.sh@350 -- # local d=3 00:18:56.722 20:50:21 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:56.722 20:50:21 -- scripts/common.sh@352 -- # echo 3 00:18:56.722 20:50:21 -- scripts/common.sh@363 -- # ver2[v]=3 00:18:56.722 20:50:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:56.722 20:50:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:56.722 20:50:21 -- scripts/common.sh@361 -- # (( v++ )) 00:18:56.722 20:50:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.722 20:50:21 -- scripts/common.sh@362 -- # decimal 0 00:18:56.722 20:50:21 -- scripts/common.sh@350 -- # local d=0 00:18:56.722 20:50:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.722 20:50:21 -- scripts/common.sh@352 -- # echo 0 00:18:56.722 20:50:21 -- scripts/common.sh@362 -- # ver1[v]=0 00:18:56.722 20:50:21 -- scripts/common.sh@363 -- # decimal 0 00:18:56.722 20:50:21 -- scripts/common.sh@350 -- # local d=0 00:18:56.722 20:50:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.722 20:50:21 -- scripts/common.sh@352 -- # echo 0 00:18:56.722 20:50:21 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:56.722 20:50:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:56.722 20:50:21 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:56.722 20:50:21 -- scripts/common.sh@361 -- # (( v++ )) 00:18:56.722 20:50:21 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:56.722 20:50:21 -- scripts/common.sh@362 -- # decimal 9 00:18:56.722 20:50:21 -- scripts/common.sh@350 -- # local d=9 00:18:56.722 20:50:21 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:56.722 20:50:21 -- scripts/common.sh@352 -- # echo 9 00:18:56.722 20:50:21 -- scripts/common.sh@362 -- # ver1[v]=9 00:18:56.722 20:50:21 -- scripts/common.sh@363 -- # decimal 0 00:18:56.722 20:50:21 -- scripts/common.sh@350 -- # local d=0 00:18:56.722 20:50:21 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:56.722 20:50:21 -- scripts/common.sh@352 -- # echo 0 00:18:56.722 20:50:21 -- scripts/common.sh@363 -- # ver2[v]=0 00:18:56.722 20:50:21 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:56.722 20:50:21 -- scripts/common.sh@364 -- # return 0 00:18:56.722 20:50:21 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:56.722 20:50:21 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:56.722 20:50:21 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:56.722 20:50:21 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:56.722 20:50:21 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:56.722 20:50:21 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:56.722 20:50:21 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:56.722 20:50:21 -- fips/fips.sh@113 -- # build_openssl_config 00:18:56.722 20:50:21 -- fips/fips.sh@37 -- # cat 00:18:56.722 20:50:21 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:56.722 20:50:21 -- fips/fips.sh@58 -- # cat - 00:18:56.722 20:50:21 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:56.722 20:50:21 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:56.722 20:50:21 -- fips/fips.sh@116 -- # mapfile -t providers 00:18:56.722 20:50:21 -- fips/fips.sh@116 -- # openssl list -providers 00:18:56.722 20:50:21 -- fips/fips.sh@116 -- # grep name 00:18:56.722 20:50:21 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:56.722 20:50:21 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:56.722 20:50:21 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:56.722 20:50:21 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:56.723 20:50:21 -- common/autotest_common.sh@638 -- # local es=0 00:18:56.723 20:50:21 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:56.723 20:50:21 -- fips/fips.sh@127 -- # : 00:18:56.723 20:50:21 -- common/autotest_common.sh@626 -- # local arg=openssl 00:18:56.723 20:50:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:56.723 20:50:21 -- common/autotest_common.sh@630 -- # type -t openssl 00:18:56.723 20:50:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:56.723 20:50:21 -- common/autotest_common.sh@632 -- # type -P openssl 00:18:56.723 20:50:21 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:56.723 20:50:21 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:18:56.723 20:50:21 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:18:56.723 20:50:21 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:18:56.723 Error setting digest 00:18:56.723 0062EABFF37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:56.723 0062EABFF37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:56.723 20:50:21 -- common/autotest_common.sh@641 -- # es=1 00:18:56.723 20:50:21 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:56.723 20:50:21 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:56.723 20:50:21 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:56.723 20:50:21 -- fips/fips.sh@130 -- # nvmftestinit 00:18:56.723 20:50:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:56.723 20:50:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:56.723 20:50:21 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:18:56.723 20:50:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:56.723 20:50:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:56.723 20:50:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.723 20:50:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.723 20:50:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.723 20:50:21 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:18:56.723 20:50:21 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:56.723 20:50:21 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.723 20:50:21 -- common/autotest_common.sh@10 -- # set +x 00:19:04.874 20:50:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:04.874 20:50:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:04.874 20:50:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:04.874 20:50:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:04.874 20:50:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:04.874 20:50:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:04.874 20:50:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:04.874 20:50:28 -- nvmf/common.sh@295 -- # net_devs=() 00:19:04.874 20:50:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:04.874 20:50:28 -- nvmf/common.sh@296 -- # e810=() 00:19:04.874 20:50:28 -- nvmf/common.sh@296 -- # local -ga e810 00:19:04.874 20:50:28 -- nvmf/common.sh@297 -- # x722=() 00:19:04.874 20:50:28 -- nvmf/common.sh@297 -- # local -ga x722 00:19:04.874 20:50:28 -- nvmf/common.sh@298 -- # mlx=() 00:19:04.874 20:50:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:04.874 20:50:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.874 20:50:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:04.874 20:50:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:04.874 20:50:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:04.874 20:50:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.874 20:50:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:04.874 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:04.874 20:50:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@351 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:04.874 20:50:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:04.874 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:04.874 20:50:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:04.874 20:50:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.874 20:50:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.874 20:50:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:04.874 20:50:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.874 20:50:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:04.874 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:04.874 20:50:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.874 20:50:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:04.874 20:50:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.874 20:50:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:04.874 20:50:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.874 20:50:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:04.874 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:04.874 20:50:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.874 20:50:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:04.874 20:50:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:04.874 20:50:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:04.874 20:50:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:04.874 20:50:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:04.874 20:50:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.874 20:50:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:04.874 20:50:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:04.874 20:50:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:04.874 20:50:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:04.874 20:50:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:04.874 20:50:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:04.874 20:50:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.874 20:50:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:04.874 20:50:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:04.874 20:50:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:04.874 20:50:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:04.874 20:50:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:04.874 20:50:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:19:04.874 20:50:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:04.874 20:50:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:04.874 20:50:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:04.874 20:50:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:04.874 20:50:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:04.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:04.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.599 ms 00:19:04.874 00:19:04.875 --- 10.0.0.2 ping statistics --- 00:19:04.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.875 rtt min/avg/max/mdev = 0.599/0.599/0.599/0.000 ms 00:19:04.875 20:50:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:04.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:04.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:19:04.875 00:19:04.875 --- 10.0.0.1 ping statistics --- 00:19:04.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.875 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:19:04.875 20:50:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.875 20:50:28 -- nvmf/common.sh@411 -- # return 0 00:19:04.875 20:50:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:04.875 20:50:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.875 20:50:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:04.875 20:50:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:04.875 20:50:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.875 20:50:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:04.875 20:50:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:04.875 20:50:28 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:19:04.875 20:50:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:04.875 20:50:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:04.875 20:50:28 -- common/autotest_common.sh@10 -- # set +x 00:19:04.875 20:50:28 -- nvmf/common.sh@470 -- # nvmfpid=2805704 00:19:04.875 20:50:28 -- nvmf/common.sh@471 -- # waitforlisten 2805704 00:19:04.875 20:50:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:04.875 20:50:28 -- common/autotest_common.sh@817 -- # '[' -z 2805704 ']' 00:19:04.875 20:50:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.875 20:50:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:04.875 20:50:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.875 20:50:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:04.875 20:50:28 -- common/autotest_common.sh@10 -- # set +x 00:19:04.875 [2024-04-24 20:50:28.609116] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:19:04.875 [2024-04-24 20:50:28.609189] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.875 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.875 [2024-04-24 20:50:28.678886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.875 [2024-04-24 20:50:28.750652] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.875 [2024-04-24 20:50:28.750687] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.875 [2024-04-24 20:50:28.750694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.875 [2024-04-24 20:50:28.750701] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.875 [2024-04-24 20:50:28.750706] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.875 [2024-04-24 20:50:28.750740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.875 20:50:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.875 20:50:29 -- common/autotest_common.sh@850 -- # return 0 00:19:04.875 20:50:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:04.875 20:50:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:04.875 20:50:29 -- common/autotest_common.sh@10 -- # set +x 00:19:04.875 20:50:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.875 20:50:29 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:19:04.875 20:50:29 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:04.875 20:50:29 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.875 20:50:29 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:04.875 20:50:29 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.875 20:50:29 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.875 20:50:29 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:04.875 20:50:29 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:05.135 [2024-04-24 20:50:29.686041] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.135 [2024-04-24 20:50:29.702035] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:05.135 [2024-04-24 20:50:29.702199] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:05.136 [2024-04-24 20:50:29.728762] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:05.136 malloc0 00:19:05.136 20:50:29 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.136 20:50:29 -- fips/fips.sh@147 -- # bdevperf_pid=2805983 00:19:05.136 20:50:29 -- fips/fips.sh@148 -- # waitforlisten 2805983 /var/tmp/bdevperf.sock 00:19:05.136 20:50:29 -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 
-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.136 20:50:29 -- common/autotest_common.sh@817 -- # '[' -z 2805983 ']' 00:19:05.136 20:50:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.136 20:50:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.136 20:50:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.136 20:50:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.136 20:50:29 -- common/autotest_common.sh@10 -- # set +x 00:19:05.396 [2024-04-24 20:50:29.811006] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:19:05.396 [2024-04-24 20:50:29.811058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805983 ] 00:19:05.396 EAL: No free 2048 kB hugepages reported on node 1 00:19:05.396 [2024-04-24 20:50:29.861171] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.396 [2024-04-24 20:50:29.912257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.396 20:50:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:05.396 20:50:29 -- common/autotest_common.sh@850 -- # return 0 00:19:05.397 20:50:29 -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:05.658 [2024-04-24 20:50:30.179878] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.658 [2024-04-24 20:50:30.179963] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:05.658 TLSTESTn1 00:19:05.658 20:50:30 -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:05.918 Running I/O for 10 seconds... 
00:19:15.912 00:19:15.912 Latency(us) 00:19:15.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.912 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:15.912 Verification LBA range: start 0x0 length 0x2000 00:19:15.912 TLSTESTn1 : 10.02 4269.25 16.68 0.00 0.00 29945.89 6335.15 59419.31 00:19:15.912 =================================================================================================================== 00:19:15.912 Total : 4269.25 16.68 0.00 0.00 29945.89 6335.15 59419.31 00:19:15.912 0 00:19:15.912 20:50:40 -- fips/fips.sh@1 -- # cleanup 00:19:15.912 20:50:40 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:15.912 20:50:40 -- common/autotest_common.sh@794 -- # type=--id 00:19:15.912 20:50:40 -- common/autotest_common.sh@795 -- # id=0 00:19:15.912 20:50:40 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:19:15.912 20:50:40 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:15.912 20:50:40 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:19:15.912 20:50:40 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:19:15.912 20:50:40 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:19:15.912 20:50:40 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:15.912 nvmf_trace.0 00:19:15.912 20:50:40 -- common/autotest_common.sh@809 -- # return 0 00:19:15.912 20:50:40 -- fips/fips.sh@16 -- # killprocess 2805983 00:19:15.912 20:50:40 -- common/autotest_common.sh@936 -- # '[' -z 2805983 ']' 00:19:15.912 20:50:40 -- common/autotest_common.sh@940 -- # kill -0 2805983 00:19:15.912 20:50:40 -- common/autotest_common.sh@941 -- # uname 00:19:15.912 20:50:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:15.912 20:50:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2805983 00:19:16.173 20:50:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:16.173 20:50:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:16.173 20:50:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2805983' 00:19:16.173 killing process with pid 2805983 00:19:16.173 20:50:40 -- common/autotest_common.sh@955 -- # kill 2805983 00:19:16.173 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.173 00:19:16.173 Latency(us) 00:19:16.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.173 =================================================================================================================== 00:19:16.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:16.173 [2024-04-24 20:50:40.582188] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.173 20:50:40 -- common/autotest_common.sh@960 -- # wait 2805983 00:19:16.173 20:50:40 -- fips/fips.sh@17 -- # nvmftestfini 00:19:16.173 20:50:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:16.173 20:50:40 -- nvmf/common.sh@117 -- # sync 00:19:16.173 20:50:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:16.173 20:50:40 -- nvmf/common.sh@120 -- # set +e 00:19:16.173 20:50:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:16.173 20:50:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:16.173 rmmod nvme_tcp 00:19:16.173 rmmod nvme_fabrics 00:19:16.173 rmmod nvme_keyring 
00:19:16.173 20:50:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:16.173 20:50:40 -- nvmf/common.sh@124 -- # set -e 00:19:16.173 20:50:40 -- nvmf/common.sh@125 -- # return 0 00:19:16.173 20:50:40 -- nvmf/common.sh@478 -- # '[' -n 2805704 ']' 00:19:16.173 20:50:40 -- nvmf/common.sh@479 -- # killprocess 2805704 00:19:16.173 20:50:40 -- common/autotest_common.sh@936 -- # '[' -z 2805704 ']' 00:19:16.173 20:50:40 -- common/autotest_common.sh@940 -- # kill -0 2805704 00:19:16.173 20:50:40 -- common/autotest_common.sh@941 -- # uname 00:19:16.173 20:50:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.173 20:50:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2805704 00:19:16.433 20:50:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:16.433 20:50:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:16.433 20:50:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2805704' 00:19:16.433 killing process with pid 2805704 00:19:16.433 20:50:40 -- common/autotest_common.sh@955 -- # kill 2805704 00:19:16.433 [2024-04-24 20:50:40.831037] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:16.433 20:50:40 -- common/autotest_common.sh@960 -- # wait 2805704 00:19:16.433 20:50:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:16.433 20:50:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:16.433 20:50:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:16.433 20:50:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.433 20:50:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:16.434 20:50:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.434 20:50:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.434 20:50:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.977 20:50:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:18.977 20:50:43 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:18.977 00:19:18.977 real 0m22.110s 00:19:18.977 user 0m22.620s 00:19:18.977 sys 0m9.696s 00:19:18.977 20:50:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:18.977 20:50:43 -- common/autotest_common.sh@10 -- # set +x 00:19:18.977 ************************************ 00:19:18.977 END TEST nvmf_fips 00:19:18.977 ************************************ 00:19:18.977 20:50:43 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:19:18.977 20:50:43 -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:19:18.977 20:50:43 -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:19:18.977 20:50:43 -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:19:18.977 20:50:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:18.977 20:50:43 -- common/autotest_common.sh@10 -- # set +x 00:19:25.565 20:50:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:25.565 20:50:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:25.565 20:50:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:25.565 20:50:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:25.565 20:50:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:25.565 20:50:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:25.565 20:50:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:25.565 20:50:49 -- nvmf/common.sh@295 -- # net_devs=() 00:19:25.565 20:50:49 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:19:25.565 20:50:49 -- nvmf/common.sh@296 -- # e810=() 00:19:25.565 20:50:49 -- nvmf/common.sh@296 -- # local -ga e810 00:19:25.565 20:50:49 -- nvmf/common.sh@297 -- # x722=() 00:19:25.565 20:50:49 -- nvmf/common.sh@297 -- # local -ga x722 00:19:25.565 20:50:49 -- nvmf/common.sh@298 -- # mlx=() 00:19:25.565 20:50:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:25.565 20:50:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:25.565 20:50:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:25.565 20:50:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:25.565 20:50:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:25.565 20:50:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.565 20:50:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:25.565 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:25.565 20:50:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:25.565 20:50:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:25.565 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:25.565 20:50:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:25.565 20:50:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:25.565 20:50:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.565 20:50:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.565 20:50:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:25.565 20:50:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.565 20:50:49 -- nvmf/common.sh@389 -- # echo 'Found net devices 
under 0000:4b:00.0: cvl_0_0' 00:19:25.565 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:25.565 20:50:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.565 20:50:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:25.565 20:50:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:25.565 20:50:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:25.565 20:50:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:25.565 20:50:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:25.565 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:25.565 20:50:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:25.565 20:50:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:25.565 20:50:49 -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:25.565 20:50:49 -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:19:25.565 20:50:49 -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:25.565 20:50:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:25.565 20:50:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:25.565 20:50:49 -- common/autotest_common.sh@10 -- # set +x 00:19:25.565 ************************************ 00:19:25.565 START TEST nvmf_perf_adq 00:19:25.565 ************************************ 00:19:25.565 20:50:49 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:25.565 * Looking for test storage... 00:19:25.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.565 20:50:50 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.565 20:50:50 -- nvmf/common.sh@7 -- # uname -s 00:19:25.565 20:50:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.565 20:50:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.565 20:50:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.565 20:50:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.565 20:50:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.565 20:50:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.565 20:50:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.565 20:50:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.565 20:50:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.565 20:50:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.565 20:50:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:25.565 20:50:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:19:25.565 20:50:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.565 20:50:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.565 20:50:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.565 20:50:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:25.565 20:50:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.565 20:50:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.565 20:50:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.565 20:50:50 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.565 20:50:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.565 20:50:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.565 20:50:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.565 20:50:50 -- paths/export.sh@5 -- # export PATH 00:19:25.565 20:50:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.565 20:50:50 -- nvmf/common.sh@47 -- # : 0 00:19:25.565 20:50:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:25.565 20:50:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:25.565 20:50:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:25.565 20:50:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.565 20:50:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.565 20:50:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:25.565 20:50:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:25.565 20:50:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:25.565 20:50:50 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:25.566 20:50:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:25.566 20:50:50 -- common/autotest_common.sh@10 -- # set +x 00:19:32.206 20:50:56 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:32.206 20:50:56 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:32.206 20:50:56 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:32.206 20:50:56 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:32.206 
20:50:56 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:32.206 20:50:56 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:32.206 20:50:56 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:32.206 20:50:56 -- nvmf/common.sh@295 -- # net_devs=() 00:19:32.206 20:50:56 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:32.206 20:50:56 -- nvmf/common.sh@296 -- # e810=() 00:19:32.206 20:50:56 -- nvmf/common.sh@296 -- # local -ga e810 00:19:32.206 20:50:56 -- nvmf/common.sh@297 -- # x722=() 00:19:32.206 20:50:56 -- nvmf/common.sh@297 -- # local -ga x722 00:19:32.206 20:50:56 -- nvmf/common.sh@298 -- # mlx=() 00:19:32.206 20:50:56 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:32.206 20:50:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:32.206 20:50:56 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:32.206 20:50:56 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:32.206 20:50:56 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:32.206 20:50:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.206 20:50:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:32.206 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:32.206 20:50:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:32.206 20:50:56 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:32.206 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:32.206 20:50:56 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:32.206 20:50:56 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:32.206 20:50:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:32.206 20:50:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.206 20:50:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.206 20:50:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.206 20:50:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:32.206 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:32.206 20:50:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.206 20:50:56 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:32.206 20:50:56 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:32.206 20:50:56 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:32.206 20:50:56 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:32.206 20:50:56 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:32.207 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:32.207 20:50:56 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:32.207 20:50:56 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:32.207 20:50:56 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:32.207 20:50:56 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:32.207 20:50:56 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:32.207 20:50:56 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:19:32.207 20:50:56 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:34.121 20:50:58 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:36.037 20:51:00 -- target/perf_adq.sh@54 -- # sleep 5 00:19:41.323 20:51:05 -- target/perf_adq.sh@67 -- # nvmftestinit 00:19:41.323 20:51:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:41.323 20:51:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.323 20:51:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:41.323 20:51:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:41.323 20:51:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:41.323 20:51:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.323 20:51:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.323 20:51:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.323 20:51:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:19:41.323 20:51:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:19:41.323 20:51:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:41.323 20:51:05 -- common/autotest_common.sh@10 -- # set +x 00:19:41.323 20:51:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:41.323 20:51:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:41.323 20:51:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:41.323 20:51:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:41.323 20:51:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:41.323 20:51:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:41.323 20:51:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:41.323 20:51:05 -- nvmf/common.sh@295 -- # net_devs=() 00:19:41.323 20:51:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:41.323 20:51:05 -- nvmf/common.sh@296 -- # e810=() 00:19:41.324 20:51:05 -- nvmf/common.sh@296 -- # local -ga e810 00:19:41.324 20:51:05 -- nvmf/common.sh@297 -- # x722=() 00:19:41.324 20:51:05 -- nvmf/common.sh@297 -- # local -ga x722 00:19:41.324 20:51:05 -- nvmf/common.sh@298 -- # mlx=() 00:19:41.324 20:51:05 -- 
nvmf/common.sh@298 -- # local -ga mlx 00:19:41.324 20:51:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:41.324 20:51:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:41.324 20:51:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:41.324 20:51:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:41.324 20:51:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.324 20:51:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:41.324 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:41.324 20:51:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:41.324 20:51:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:41.324 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:41.324 20:51:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:41.324 20:51:05 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.324 20:51:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.324 20:51:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.324 20:51:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.324 20:51:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:41.324 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:41.324 20:51:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.324 20:51:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:41.324 20:51:05 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:41.324 20:51:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:19:41.324 20:51:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:41.324 20:51:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:41.324 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:41.324 20:51:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:19:41.324 20:51:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:19:41.324 20:51:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:19:41.324 20:51:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:19:41.324 20:51:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.324 20:51:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:41.324 20:51:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:41.324 20:51:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:41.324 20:51:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:41.324 20:51:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:41.324 20:51:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:41.324 20:51:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:41.324 20:51:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:41.324 20:51:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:41.324 20:51:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:41.324 20:51:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:41.324 20:51:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:41.324 20:51:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:41.324 20:51:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:41.324 20:51:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:41.324 20:51:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:41.324 20:51:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:41.324 20:51:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:41.324 20:51:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:41.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:41.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.604 ms 00:19:41.324 00:19:41.324 --- 10.0.0.2 ping statistics --- 00:19:41.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.324 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:19:41.324 20:51:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:41.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:41.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:19:41.324 00:19:41.324 --- 10.0.0.1 ping statistics --- 00:19:41.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.324 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:19:41.324 20:51:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.324 20:51:05 -- nvmf/common.sh@411 -- # return 0 00:19:41.324 20:51:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:41.324 20:51:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.324 20:51:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:41.324 20:51:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.324 20:51:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:41.324 20:51:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:41.324 20:51:05 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:41.324 20:51:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:41.324 20:51:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:41.324 20:51:05 -- common/autotest_common.sh@10 -- # set +x 00:19:41.324 20:51:05 -- nvmf/common.sh@470 -- # nvmfpid=2817767 00:19:41.324 20:51:05 -- nvmf/common.sh@471 -- # waitforlisten 2817767 00:19:41.324 20:51:05 -- common/autotest_common.sh@817 -- # '[' -z 2817767 ']' 00:19:41.324 20:51:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:41.324 20:51:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.324 20:51:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:41.324 20:51:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.324 20:51:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:41.324 20:51:05 -- common/autotest_common.sh@10 -- # set +x 00:19:41.324 [2024-04-24 20:51:05.753338] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:19:41.324 [2024-04-24 20:51:05.753402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.324 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.324 [2024-04-24 20:51:05.841650] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:41.324 [2024-04-24 20:51:05.937027] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.324 [2024-04-24 20:51:05.937088] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.324 [2024-04-24 20:51:05.937096] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.324 [2024-04-24 20:51:05.937102] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.324 [2024-04-24 20:51:05.937108] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
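For readability, the nvmftestinit trace above boils down to the following target/initiator split, with one ice port moved into a private network namespace. This is a condensed sketch assembled from the commands already shown in the trace; the interface names, addresses, and namespace name are taken verbatim from the log.
ip netns add cvl_0_0_ns_spdk                                        # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side ice port
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in
ping -c 1 10.0.0.2                                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator
The two ping checks confirm both directions before the harness loads nvme-tcp and launches nvmf_tgt inside the namespace, which is what the reactor-start notices below correspond to.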
00:19:41.324 [2024-04-24 20:51:05.937243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.324 [2024-04-24 20:51:05.937377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.324 [2024-04-24 20:51:05.937543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.324 [2024-04-24 20:51:05.937543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:42.267 20:51:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:42.267 20:51:06 -- common/autotest_common.sh@850 -- # return 0 00:19:42.267 20:51:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:42.267 20:51:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:42.267 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.267 20:51:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:42.267 20:51:06 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:19:42.267 20:51:06 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:42.267 20:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.267 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.267 20:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.267 20:51:06 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:19:42.267 20:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.267 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.267 20:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.267 20:51:06 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:42.267 20:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.267 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.267 [2024-04-24 20:51:06.766668] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:42.267 20:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.267 20:51:06 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:42.267 20:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.268 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.268 Malloc1 00:19:42.268 20:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.268 20:51:06 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.268 20:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.268 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.268 20:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.268 20:51:06 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:42.268 20:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.268 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.268 20:51:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.268 20:51:06 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.268 20:51:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.268 20:51:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.268 [2024-04-24 20:51:06.825999] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.268 20:51:06 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.268 20:51:06 -- target/perf_adq.sh@73 -- # perfpid=2818012 00:19:42.268 20:51:06 -- target/perf_adq.sh@74 -- # sleep 2 00:19:42.268 20:51:06 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:42.268 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.814 20:51:08 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:19:44.814 20:51:08 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:44.814 20:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.814 20:51:08 -- target/perf_adq.sh@76 -- # wc -l 00:19:44.814 20:51:08 -- common/autotest_common.sh@10 -- # set +x 00:19:44.814 20:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.814 20:51:08 -- target/perf_adq.sh@76 -- # count=4 00:19:44.814 20:51:08 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:19:44.814 20:51:08 -- target/perf_adq.sh@81 -- # wait 2818012 00:19:52.951 Initializing NVMe Controllers 00:19:52.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:52.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:52.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:52.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:52.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:52.951 Initialization complete. Launching workers. 00:19:52.951 ======================================================== 00:19:52.951 Latency(us) 00:19:52.951 Device Information : IOPS MiB/s Average min max 00:19:52.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10319.30 40.31 6204.31 1877.77 9895.42 00:19:52.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14453.00 56.46 4435.68 1179.20 43526.88 00:19:52.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10282.30 40.17 6224.95 1601.95 11321.81 00:19:52.951 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10104.10 39.47 6335.27 1561.90 13419.29 00:19:52.951 ======================================================== 00:19:52.951 Total : 45158.69 176.40 5672.26 1179.20 43526.88 00:19:52.951 00:19:52.951 20:51:17 -- target/perf_adq.sh@82 -- # nvmftestfini 00:19:52.951 20:51:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:52.951 20:51:17 -- nvmf/common.sh@117 -- # sync 00:19:52.951 20:51:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.951 20:51:17 -- nvmf/common.sh@120 -- # set +e 00:19:52.951 20:51:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.951 20:51:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.951 rmmod nvme_tcp 00:19:52.951 rmmod nvme_fabrics 00:19:52.951 rmmod nvme_keyring 00:19:52.951 20:51:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.951 20:51:17 -- nvmf/common.sh@124 -- # set -e 00:19:52.951 20:51:17 -- nvmf/common.sh@125 -- # return 0 00:19:52.951 20:51:17 -- nvmf/common.sh@478 -- # '[' -n 2817767 ']' 00:19:52.951 20:51:17 -- nvmf/common.sh@479 -- # killprocess 2817767 00:19:52.951 20:51:17 -- common/autotest_common.sh@936 -- # '[' -z 2817767 ']' 00:19:52.951 20:51:17 -- common/autotest_common.sh@940 
-- # kill -0 2817767 00:19:52.951 20:51:17 -- common/autotest_common.sh@941 -- # uname 00:19:52.951 20:51:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.951 20:51:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2817767 00:19:52.951 20:51:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:52.951 20:51:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:52.952 20:51:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2817767' 00:19:52.952 killing process with pid 2817767 00:19:52.952 20:51:17 -- common/autotest_common.sh@955 -- # kill 2817767 00:19:52.952 20:51:17 -- common/autotest_common.sh@960 -- # wait 2817767 00:19:52.952 20:51:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:52.952 20:51:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:52.952 20:51:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:52.952 20:51:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:52.952 20:51:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:52.952 20:51:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:52.952 20:51:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:52.952 20:51:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.864 20:51:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:54.864 20:51:19 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:19:54.864 20:51:19 -- target/perf_adq.sh@52 -- # rmmod ice 00:19:56.778 20:51:20 -- target/perf_adq.sh@53 -- # modprobe ice 00:19:58.756 20:51:23 -- target/perf_adq.sh@54 -- # sleep 5 00:20:04.045 20:51:28 -- target/perf_adq.sh@87 -- # nvmftestinit 00:20:04.045 20:51:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:04.045 20:51:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.045 20:51:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:04.045 20:51:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:04.045 20:51:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:04.045 20:51:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.045 20:51:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.045 20:51:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.045 20:51:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:04.045 20:51:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.045 20:51:28 -- common/autotest_common.sh@10 -- # set +x 00:20:04.045 20:51:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:04.045 20:51:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.045 20:51:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.045 20:51:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.045 20:51:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.045 20:51:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.045 20:51:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.045 20:51:28 -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.045 20:51:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.045 20:51:28 -- nvmf/common.sh@296 -- # e810=() 00:20:04.045 20:51:28 -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.045 20:51:28 -- nvmf/common.sh@297 -- # x722=() 00:20:04.045 20:51:28 -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.045 20:51:28 -- nvmf/common.sh@298 -- # mlx=() 00:20:04.045 
20:51:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.045 20:51:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.045 20:51:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.045 20:51:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.045 20:51:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.045 20:51:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.045 20:51:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:04.045 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:04.045 20:51:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.045 20:51:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:04.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:04.045 20:51:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.045 20:51:28 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.045 20:51:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.045 20:51:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:04.045 20:51:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.045 20:51:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:04.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:04.045 20:51:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.045 20:51:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.045 20:51:28 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.045 20:51:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:04.045 20:51:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.045 20:51:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:04.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:04.045 20:51:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.045 20:51:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:04.045 20:51:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:04.045 20:51:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:04.045 20:51:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:04.045 20:51:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.045 20:51:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.045 20:51:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.045 20:51:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.045 20:51:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.045 20:51:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.045 20:51:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.045 20:51:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.045 20:51:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.045 20:51:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.045 20:51:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.045 20:51:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.045 20:51:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.045 20:51:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.046 20:51:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.046 20:51:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.046 20:51:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.046 20:51:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.046 20:51:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.046 20:51:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:20:04.046 00:20:04.046 --- 10.0.0.2 ping statistics --- 00:20:04.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.046 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:20:04.046 20:51:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:20:04.046 00:20:04.046 --- 10.0.0.1 ping statistics --- 00:20:04.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.046 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:04.046 20:51:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.046 20:51:28 -- nvmf/common.sh@411 -- # return 0 00:20:04.046 20:51:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:04.046 20:51:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.046 20:51:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:04.046 20:51:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:04.046 20:51:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.046 20:51:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:04.046 20:51:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:04.046 20:51:28 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:20:04.046 20:51:28 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:04.046 20:51:28 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:04.046 20:51:28 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:04.046 net.core.busy_poll = 1 00:20:04.046 20:51:28 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:04.046 net.core.busy_read = 1 00:20:04.046 20:51:28 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:04.046 20:51:28 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:04.046 20:51:28 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:04.046 20:51:28 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:04.046 20:51:28 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:04.307 20:51:28 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:04.307 20:51:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:04.307 20:51:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:04.307 20:51:28 -- common/autotest_common.sh@10 -- # set +x 00:20:04.307 20:51:28 -- nvmf/common.sh@470 -- # nvmfpid=2823155 00:20:04.307 20:51:28 -- nvmf/common.sh@471 -- # waitforlisten 2823155 00:20:04.307 20:51:28 -- common/autotest_common.sh@817 -- # '[' -z 2823155 ']' 00:20:04.307 20:51:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:04.307 20:51:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.307 20:51:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:04.307 20:51:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
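The adq_configure_driver steps traced above amount to the following per-port ADQ setup. This is again a condensed sketch of commands copied from the log; cvl_0_0 is the target-side port inside the cvl_0_0_ns_spdk namespace, and 10.0.0.2:4420 is the NVMe/TCP listener configured further down.
# Enable hardware TC offload and busy polling on the target port.
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# Carve out a second traffic class and steer NVMe/TCP traffic into it in hardware.
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# Followed by scripts/perf/nvmf/set_xps_rxqs cvl_0_0, as shown in the trace above.
On the SPDK side this is paired with sock_impl_set_options --enable-placement-id 1 and nvmf_create_transport --sock-priority 1 in the RPC trace that follows.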
00:20:04.307 20:51:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:04.307 20:51:28 -- common/autotest_common.sh@10 -- # set +x 00:20:04.307 [2024-04-24 20:51:28.764133] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:20:04.307 [2024-04-24 20:51:28.764197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.307 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.307 [2024-04-24 20:51:28.852934] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.568 [2024-04-24 20:51:28.949104] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.568 [2024-04-24 20:51:28.949165] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.568 [2024-04-24 20:51:28.949173] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.568 [2024-04-24 20:51:28.949179] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.568 [2024-04-24 20:51:28.949186] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.568 [2024-04-24 20:51:28.949334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.568 [2024-04-24 20:51:28.949463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.568 [2024-04-24 20:51:28.949630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.568 [2024-04-24 20:51:28.949631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.138 20:51:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:05.138 20:51:29 -- common/autotest_common.sh@850 -- # return 0 00:20:05.138 20:51:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:05.138 20:51:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:05.138 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.138 20:51:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.138 20:51:29 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:20:05.138 20:51:29 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:05.138 20:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.138 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.138 20:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.138 20:51:29 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:20:05.138 20:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.138 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.138 20:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.138 20:51:29 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:05.138 20:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.138 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.138 [2024-04-24 20:51:29.763683] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.138 20:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.138 20:51:29 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 
00:20:05.139 20:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.139 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.399 Malloc1 00:20:05.399 20:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.399 20:51:29 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.399 20:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.399 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.399 20:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.399 20:51:29 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:05.399 20:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.399 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.399 20:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.399 20:51:29 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.399 20:51:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:05.399 20:51:29 -- common/autotest_common.sh@10 -- # set +x 00:20:05.399 [2024-04-24 20:51:29.819014] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.399 20:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:05.399 20:51:29 -- target/perf_adq.sh@94 -- # perfpid=2823290 00:20:05.399 20:51:29 -- target/perf_adq.sh@95 -- # sleep 2 00:20:05.399 20:51:29 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:05.399 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.312 20:51:31 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:20:07.312 20:51:31 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:07.312 20:51:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:07.312 20:51:31 -- target/perf_adq.sh@97 -- # wc -l 00:20:07.312 20:51:31 -- common/autotest_common.sh@10 -- # set +x 00:20:07.312 20:51:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:07.312 20:51:31 -- target/perf_adq.sh@97 -- # count=2 00:20:07.312 20:51:31 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:20:07.312 20:51:31 -- target/perf_adq.sh@103 -- # wait 2823290 00:20:15.453 Initializing NVMe Controllers 00:20:15.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:15.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:15.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:15.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:15.453 Initialization complete. Launching workers. 
00:20:15.453 ======================================================== 00:20:15.453 Latency(us) 00:20:15.453 Device Information : IOPS MiB/s Average min max 00:20:15.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9225.34 36.04 6938.18 1553.20 52247.05 00:20:15.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11282.73 44.07 5672.80 1306.42 49238.82 00:20:15.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5657.76 22.10 11319.19 1826.50 54047.44 00:20:15.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9340.64 36.49 6851.90 1055.06 50264.41 00:20:15.453 ======================================================== 00:20:15.453 Total : 35506.47 138.70 7211.48 1055.06 54047.44 00:20:15.453 00:20:15.453 20:51:39 -- target/perf_adq.sh@104 -- # nvmftestfini 00:20:15.453 20:51:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:15.453 20:51:39 -- nvmf/common.sh@117 -- # sync 00:20:15.453 20:51:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.453 20:51:39 -- nvmf/common.sh@120 -- # set +e 00:20:15.453 20:51:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.453 20:51:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.453 rmmod nvme_tcp 00:20:15.453 rmmod nvme_fabrics 00:20:15.453 rmmod nvme_keyring 00:20:15.453 20:51:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.453 20:51:40 -- nvmf/common.sh@124 -- # set -e 00:20:15.453 20:51:40 -- nvmf/common.sh@125 -- # return 0 00:20:15.453 20:51:40 -- nvmf/common.sh@478 -- # '[' -n 2823155 ']' 00:20:15.453 20:51:40 -- nvmf/common.sh@479 -- # killprocess 2823155 00:20:15.453 20:51:40 -- common/autotest_common.sh@936 -- # '[' -z 2823155 ']' 00:20:15.453 20:51:40 -- common/autotest_common.sh@940 -- # kill -0 2823155 00:20:15.453 20:51:40 -- common/autotest_common.sh@941 -- # uname 00:20:15.453 20:51:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:15.453 20:51:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2823155 00:20:15.713 20:51:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:15.713 20:51:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:15.713 20:51:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2823155' 00:20:15.713 killing process with pid 2823155 00:20:15.713 20:51:40 -- common/autotest_common.sh@955 -- # kill 2823155 00:20:15.713 20:51:40 -- common/autotest_common.sh@960 -- # wait 2823155 00:20:15.713 20:51:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:15.713 20:51:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:15.713 20:51:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:15.713 20:51:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.713 20:51:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.713 20:51:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.713 20:51:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.713 20:51:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.254 20:51:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.254 20:51:42 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:20:18.254 00:20:18.254 real 0m52.387s 00:20:18.254 user 2m49.357s 00:20:18.254 sys 0m10.628s 00:20:18.254 20:51:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:18.254 20:51:42 -- common/autotest_common.sh@10 -- # set +x 00:20:18.254 
************************************ 00:20:18.254 END TEST nvmf_perf_adq 00:20:18.254 ************************************ 00:20:18.254 20:51:42 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:18.254 20:51:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:18.254 20:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.254 20:51:42 -- common/autotest_common.sh@10 -- # set +x 00:20:18.254 ************************************ 00:20:18.254 START TEST nvmf_shutdown 00:20:18.254 ************************************ 00:20:18.254 20:51:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:18.254 * Looking for test storage... 00:20:18.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:18.254 20:51:42 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.254 20:51:42 -- nvmf/common.sh@7 -- # uname -s 00:20:18.254 20:51:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.254 20:51:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.254 20:51:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.254 20:51:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.254 20:51:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.254 20:51:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.254 20:51:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.254 20:51:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.254 20:51:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.254 20:51:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.254 20:51:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:18.254 20:51:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:18.254 20:51:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.254 20:51:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.254 20:51:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.254 20:51:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.255 20:51:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.255 20:51:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.255 20:51:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.255 20:51:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.255 20:51:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.255 20:51:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.255 20:51:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.255 20:51:42 -- paths/export.sh@5 -- # export PATH 00:20:18.255 20:51:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.255 20:51:42 -- nvmf/common.sh@47 -- # : 0 00:20:18.255 20:51:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.255 20:51:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.255 20:51:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.255 20:51:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.255 20:51:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.255 20:51:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.255 20:51:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.255 20:51:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.255 20:51:42 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:18.255 20:51:42 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:18.255 20:51:42 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:18.255 20:51:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:18.255 20:51:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:18.255 20:51:42 -- common/autotest_common.sh@10 -- # set +x 00:20:18.255 ************************************ 00:20:18.255 START TEST nvmf_shutdown_tc1 00:20:18.255 ************************************ 00:20:18.255 20:51:42 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc1 00:20:18.255 20:51:42 -- target/shutdown.sh@74 -- # starttarget 00:20:18.255 20:51:42 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:18.255 20:51:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:18.255 20:51:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.255 20:51:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:18.255 20:51:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:18.255 20:51:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:18.255 
20:51:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.255 20:51:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.255 20:51:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.255 20:51:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:18.255 20:51:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:18.255 20:51:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.255 20:51:42 -- common/autotest_common.sh@10 -- # set +x 00:20:26.398 20:51:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:26.398 20:51:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.398 20:51:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.398 20:51:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.398 20:51:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.398 20:51:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.398 20:51:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.398 20:51:49 -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.398 20:51:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:26.398 20:51:49 -- nvmf/common.sh@296 -- # e810=() 00:20:26.398 20:51:49 -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.398 20:51:49 -- nvmf/common.sh@297 -- # x722=() 00:20:26.398 20:51:49 -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.398 20:51:49 -- nvmf/common.sh@298 -- # mlx=() 00:20:26.398 20:51:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.398 20:51:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.398 20:51:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.398 20:51:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.398 20:51:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.398 20:51:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.398 20:51:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:26.398 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:26.398 20:51:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.398 20:51:49 -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:26.398 20:51:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:26.398 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:26.399 20:51:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.399 20:51:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.399 20:51:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.399 20:51:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:26.399 20:51:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.399 20:51:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:26.399 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:26.399 20:51:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.399 20:51:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.399 20:51:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.399 20:51:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:26.399 20:51:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.399 20:51:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:26.399 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:26.399 20:51:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.399 20:51:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:26.399 20:51:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:26.399 20:51:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:26.399 20:51:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.399 20:51:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.399 20:51:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.399 20:51:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.399 20:51:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.399 20:51:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.399 20:51:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.399 20:51:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.399 20:51:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.399 20:51:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.399 20:51:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.399 20:51:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.399 20:51:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.399 20:51:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.399 20:51:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.399 20:51:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.399 20:51:49 -- nvmf/common.sh@260 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.399 20:51:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.399 20:51:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.399 20:51:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:20:26.399 00:20:26.399 --- 10.0.0.2 ping statistics --- 00:20:26.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.399 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:20:26.399 20:51:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:26.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:20:26.399 00:20:26.399 --- 10.0.0.1 ping statistics --- 00:20:26.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.399 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:26.399 20:51:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.399 20:51:49 -- nvmf/common.sh@411 -- # return 0 00:20:26.399 20:51:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:26.399 20:51:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.399 20:51:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:26.399 20:51:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.399 20:51:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:26.399 20:51:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:26.399 20:51:49 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:26.399 20:51:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:26.399 20:51:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:26.399 20:51:49 -- common/autotest_common.sh@10 -- # set +x 00:20:26.399 20:51:49 -- nvmf/common.sh@470 -- # nvmfpid=2829743 00:20:26.399 20:51:49 -- nvmf/common.sh@471 -- # waitforlisten 2829743 00:20:26.399 20:51:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:26.399 20:51:49 -- common/autotest_common.sh@817 -- # '[' -z 2829743 ']' 00:20:26.399 20:51:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.399 20:51:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.399 20:51:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.399 20:51:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.399 20:51:49 -- common/autotest_common.sh@10 -- # set +x 00:20:26.399 [2024-04-24 20:51:49.990891] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:20:26.399 [2024-04-24 20:51:49.990943] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.399 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.399 [2024-04-24 20:51:50.058972] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.399 [2024-04-24 20:51:50.123652] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.399 [2024-04-24 20:51:50.123690] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.399 [2024-04-24 20:51:50.123699] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.399 [2024-04-24 20:51:50.123707] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.399 [2024-04-24 20:51:50.123713] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:26.399 [2024-04-24 20:51:50.123864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.399 [2024-04-24 20:51:50.124115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.399 [2024-04-24 20:51:50.124260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.399 [2024-04-24 20:51:50.124260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:26.399 20:51:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:26.399 20:51:50 -- common/autotest_common.sh@850 -- # return 0 00:20:26.399 20:51:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:26.399 20:51:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:26.399 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:20:26.399 20:51:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.399 20:51:50 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.400 20:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.400 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:20:26.400 [2024-04-24 20:51:50.271544] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.400 20:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.400 20:51:50 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:26.400 20:51:50 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:26.400 20:51:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:26.400 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:20:26.400 20:51:50 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 
-- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:26.400 20:51:50 -- target/shutdown.sh@28 -- # cat 00:20:26.400 20:51:50 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:26.400 20:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:26.400 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:20:26.400 Malloc1 00:20:26.400 [2024-04-24 20:51:50.374985] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.400 Malloc2 00:20:26.400 Malloc3 00:20:26.400 Malloc4 00:20:26.400 Malloc5 00:20:26.400 Malloc6 00:20:26.400 Malloc7 00:20:26.400 Malloc8 00:20:26.400 Malloc9 00:20:26.400 Malloc10 00:20:26.400 20:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:26.400 20:51:50 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:26.400 20:51:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:26.400 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:20:26.400 20:51:50 -- target/shutdown.sh@78 -- # perfpid=2829802 00:20:26.400 20:51:50 -- target/shutdown.sh@79 -- # waitforlisten 2829802 /var/tmp/bdevperf.sock 00:20:26.400 20:51:50 -- common/autotest_common.sh@817 -- # '[' -z 2829802 ']' 00:20:26.400 20:51:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.400 20:51:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:26.400 20:51:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
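The create_subsystems step above shows only a series of bare cat commands because each loop iteration appends its RPCs to rpcs.txt and the whole file is applied in one rpc_cmd batch; the Malloc1 through Malloc10 bdevs and the listener on 10.0.0.2 port 4420 are the visible result. A plausible reconstruction of what each iteration emits (sizes, serial numbers and exact options are illustrative rather than copied from shutdown.sh):

for i in {1..10}; do
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"                    # a small RAM-backed bdev per subsystem
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
        echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
        echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
    } >> rpcs.txt
done
rpc_cmd < rpcs.txt    # rpc_cmd wraps scripts/rpc.py and talks to the nvmf_tgt started earlier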
00:20:26.400 20:51:50 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:26.400 20:51:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:26.400 20:51:50 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:26.400 20:51:50 -- common/autotest_common.sh@10 -- # set +x 00:20:26.400 20:51:50 -- nvmf/common.sh@521 -- # config=() 00:20:26.400 20:51:50 -- nvmf/common.sh@521 -- # local subsystem config 00:20:26.400 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.400 { 00:20:26.400 "params": { 00:20:26.400 "name": "Nvme$subsystem", 00:20:26.400 "trtype": "$TEST_TRANSPORT", 00:20:26.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.400 "adrfam": "ipv4", 00:20:26.400 "trsvcid": "$NVMF_PORT", 00:20:26.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.400 "hdgst": ${hdgst:-false}, 00:20:26.400 "ddgst": ${ddgst:-false} 00:20:26.400 }, 00:20:26.400 "method": "bdev_nvme_attach_controller" 00:20:26.400 } 00:20:26.400 EOF 00:20:26.400 )") 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.400 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.400 { 00:20:26.400 "params": { 00:20:26.400 "name": "Nvme$subsystem", 00:20:26.400 "trtype": "$TEST_TRANSPORT", 00:20:26.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.400 "adrfam": "ipv4", 00:20:26.400 "trsvcid": "$NVMF_PORT", 00:20:26.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.400 "hdgst": ${hdgst:-false}, 00:20:26.400 "ddgst": ${ddgst:-false} 00:20:26.400 }, 00:20:26.400 "method": "bdev_nvme_attach_controller" 00:20:26.400 } 00:20:26.400 EOF 00:20:26.400 )") 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.400 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.400 { 00:20:26.400 "params": { 00:20:26.400 "name": "Nvme$subsystem", 00:20:26.400 "trtype": "$TEST_TRANSPORT", 00:20:26.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.400 "adrfam": "ipv4", 00:20:26.400 "trsvcid": "$NVMF_PORT", 00:20:26.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.400 "hdgst": ${hdgst:-false}, 00:20:26.400 "ddgst": ${ddgst:-false} 00:20:26.400 }, 00:20:26.400 "method": "bdev_nvme_attach_controller" 00:20:26.400 } 00:20:26.400 EOF 00:20:26.400 )") 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.400 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.400 { 00:20:26.400 "params": { 00:20:26.400 "name": "Nvme$subsystem", 00:20:26.400 "trtype": "$TEST_TRANSPORT", 00:20:26.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.400 "adrfam": "ipv4", 00:20:26.400 "trsvcid": "$NVMF_PORT", 00:20:26.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.400 "hdgst": ${hdgst:-false}, 00:20:26.400 "ddgst": ${ddgst:-false} 00:20:26.400 }, 00:20:26.400 "method": "bdev_nvme_attach_controller" 00:20:26.400 } 00:20:26.400 EOF 00:20:26.400 )") 00:20:26.400 20:51:50 -- 
nvmf/common.sh@543 -- # cat 00:20:26.400 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.400 { 00:20:26.400 "params": { 00:20:26.400 "name": "Nvme$subsystem", 00:20:26.400 "trtype": "$TEST_TRANSPORT", 00:20:26.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.400 "adrfam": "ipv4", 00:20:26.400 "trsvcid": "$NVMF_PORT", 00:20:26.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.400 "hdgst": ${hdgst:-false}, 00:20:26.400 "ddgst": ${ddgst:-false} 00:20:26.400 }, 00:20:26.400 "method": "bdev_nvme_attach_controller" 00:20:26.400 } 00:20:26.400 EOF 00:20:26.400 )") 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.400 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.400 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.401 { 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme$subsystem", 00:20:26.401 "trtype": "$TEST_TRANSPORT", 00:20:26.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "$NVMF_PORT", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.401 "hdgst": ${hdgst:-false}, 00:20:26.401 "ddgst": ${ddgst:-false} 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 } 00:20:26.401 EOF 00:20:26.401 )") 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.401 [2024-04-24 20:51:50.825787] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:20:26.401 [2024-04-24 20:51:50.825838] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:26.401 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.401 { 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme$subsystem", 00:20:26.401 "trtype": "$TEST_TRANSPORT", 00:20:26.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "$NVMF_PORT", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.401 "hdgst": ${hdgst:-false}, 00:20:26.401 "ddgst": ${ddgst:-false} 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 } 00:20:26.401 EOF 00:20:26.401 )") 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.401 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.401 { 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme$subsystem", 00:20:26.401 "trtype": "$TEST_TRANSPORT", 00:20:26.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "$NVMF_PORT", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.401 "hdgst": ${hdgst:-false}, 00:20:26.401 "ddgst": ${ddgst:-false} 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 } 00:20:26.401 EOF 00:20:26.401 )") 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.401 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.401 20:51:50 -- nvmf/common.sh@543 
-- # config+=("$(cat <<-EOF 00:20:26.401 { 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme$subsystem", 00:20:26.401 "trtype": "$TEST_TRANSPORT", 00:20:26.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "$NVMF_PORT", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.401 "hdgst": ${hdgst:-false}, 00:20:26.401 "ddgst": ${ddgst:-false} 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 } 00:20:26.401 EOF 00:20:26.401 )") 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.401 20:51:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:26.401 { 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme$subsystem", 00:20:26.401 "trtype": "$TEST_TRANSPORT", 00:20:26.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "$NVMF_PORT", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.401 "hdgst": ${hdgst:-false}, 00:20:26.401 "ddgst": ${ddgst:-false} 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 } 00:20:26.401 EOF 00:20:26.401 )") 00:20:26.401 EAL: No free 2048 kB hugepages reported on node 1 00:20:26.401 20:51:50 -- nvmf/common.sh@543 -- # cat 00:20:26.401 20:51:50 -- nvmf/common.sh@545 -- # jq . 00:20:26.401 20:51:50 -- nvmf/common.sh@546 -- # IFS=, 00:20:26.401 20:51:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme1", 00:20:26.401 "trtype": "tcp", 00:20:26.401 "traddr": "10.0.0.2", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "4420", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.401 "hdgst": false, 00:20:26.401 "ddgst": false 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 },{ 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme2", 00:20:26.401 "trtype": "tcp", 00:20:26.401 "traddr": "10.0.0.2", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "4420", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:26.401 "hdgst": false, 00:20:26.401 "ddgst": false 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 },{ 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme3", 00:20:26.401 "trtype": "tcp", 00:20:26.401 "traddr": "10.0.0.2", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "4420", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:26.401 "hdgst": false, 00:20:26.401 "ddgst": false 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 },{ 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme4", 00:20:26.401 "trtype": "tcp", 00:20:26.401 "traddr": "10.0.0.2", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "4420", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:26.401 "hdgst": false, 00:20:26.401 "ddgst": false 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 },{ 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme5", 00:20:26.401 "trtype": "tcp", 00:20:26.401 "traddr": "10.0.0.2", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "4420", 
00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:26.401 "hdgst": false, 00:20:26.401 "ddgst": false 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 },{ 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme6", 00:20:26.401 "trtype": "tcp", 00:20:26.401 "traddr": "10.0.0.2", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "4420", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:26.401 "hdgst": false, 00:20:26.401 "ddgst": false 00:20:26.401 }, 00:20:26.401 "method": "bdev_nvme_attach_controller" 00:20:26.401 },{ 00:20:26.401 "params": { 00:20:26.401 "name": "Nvme7", 00:20:26.401 "trtype": "tcp", 00:20:26.401 "traddr": "10.0.0.2", 00:20:26.401 "adrfam": "ipv4", 00:20:26.401 "trsvcid": "4420", 00:20:26.401 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:26.401 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:26.401 "hdgst": false, 00:20:26.401 "ddgst": false 00:20:26.401 }, 00:20:26.402 "method": "bdev_nvme_attach_controller" 00:20:26.402 },{ 00:20:26.402 "params": { 00:20:26.402 "name": "Nvme8", 00:20:26.402 "trtype": "tcp", 00:20:26.402 "traddr": "10.0.0.2", 00:20:26.402 "adrfam": "ipv4", 00:20:26.402 "trsvcid": "4420", 00:20:26.402 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:26.402 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:26.402 "hdgst": false, 00:20:26.402 "ddgst": false 00:20:26.402 }, 00:20:26.402 "method": "bdev_nvme_attach_controller" 00:20:26.402 },{ 00:20:26.402 "params": { 00:20:26.402 "name": "Nvme9", 00:20:26.402 "trtype": "tcp", 00:20:26.402 "traddr": "10.0.0.2", 00:20:26.402 "adrfam": "ipv4", 00:20:26.402 "trsvcid": "4420", 00:20:26.402 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:26.402 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:26.402 "hdgst": false, 00:20:26.402 "ddgst": false 00:20:26.402 }, 00:20:26.402 "method": "bdev_nvme_attach_controller" 00:20:26.402 },{ 00:20:26.402 "params": { 00:20:26.402 "name": "Nvme10", 00:20:26.402 "trtype": "tcp", 00:20:26.402 "traddr": "10.0.0.2", 00:20:26.402 "adrfam": "ipv4", 00:20:26.402 "trsvcid": "4420", 00:20:26.402 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:26.402 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:26.402 "hdgst": false, 00:20:26.402 "ddgst": false 00:20:26.402 }, 00:20:26.402 "method": "bdev_nvme_attach_controller" 00:20:26.402 }' 00:20:26.402 [2024-04-24 20:51:50.902501] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.402 [2024-04-24 20:51:50.966397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.859 20:51:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:27.859 20:51:52 -- common/autotest_common.sh@850 -- # return 0 00:20:27.859 20:51:52 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:27.859 20:51:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:27.859 20:51:52 -- common/autotest_common.sh@10 -- # set +x 00:20:27.859 20:51:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:27.859 20:51:52 -- target/shutdown.sh@83 -- # kill -9 2829802 00:20:27.859 20:51:52 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:27.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2829802 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:27.859 20:51:52 -- target/shutdown.sh@87 -- # sleep 1 00:20:28.803 
20:51:53 -- target/shutdown.sh@88 -- # kill -0 2829743 00:20:28.803 20:51:53 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:28.803 20:51:53 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:28.803 20:51:53 -- nvmf/common.sh@521 -- # config=() 00:20:28.803 20:51:53 -- nvmf/common.sh@521 -- # local subsystem config 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in 
"${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 [2024-04-24 20:51:53.263998] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:20:28.803 [2024-04-24 20:51:53.264069] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2830419 ] 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:28.803 { 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme$subsystem", 00:20:28.803 "trtype": "$TEST_TRANSPORT", 00:20:28.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "$NVMF_PORT", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:28.803 "hdgst": ${hdgst:-false}, 00:20:28.803 "ddgst": ${ddgst:-false} 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 } 00:20:28.803 EOF 00:20:28.803 )") 00:20:28.803 20:51:53 -- nvmf/common.sh@543 -- # cat 00:20:28.803 20:51:53 -- nvmf/common.sh@545 -- # jq . 
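The joined result is never written to disk: the jq step above pretty-prints it and the benchmark reads it from a file descriptor supplied by process substitution (--json /dev/fd/62 here, /dev/fd/63 in the other runs). The tc1 measurement that follows is therefore equivalent to the invocation below, assuming the generated config had first been saved to a file (nvmf.json is a hypothetical name; flag meanings as documented for SPDK's bdevperf example application):

./build/examples/bdevperf --json nvmf.json -q 64 -o 65536 -w verify -t 1
#   --json nvmf.json   attach the ten NVMe-oF controllers described above as bdevs
#   -q 64              queue depth per bdev
#   -o 65536           I/O size in bytes (64 KiB)
#   -w verify          write the LBA range, read it back and compare
#   -t 1               run time in seconds (the tc2 run later uses -t 10 and adds -r /var/tmp/bdevperf.sock for RPC control)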
00:20:28.803 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.803 20:51:53 -- nvmf/common.sh@546 -- # IFS=, 00:20:28.803 20:51:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme1", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme2", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme3", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme4", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme5", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme6", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme7", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme8", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:28.803 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:28.803 "hdgst": false, 00:20:28.803 "ddgst": false 
00:20:28.803 }, 00:20:28.803 "method": "bdev_nvme_attach_controller" 00:20:28.803 },{ 00:20:28.803 "params": { 00:20:28.803 "name": "Nvme9", 00:20:28.803 "trtype": "tcp", 00:20:28.803 "traddr": "10.0.0.2", 00:20:28.803 "adrfam": "ipv4", 00:20:28.803 "trsvcid": "4420", 00:20:28.803 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:28.804 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:28.804 "hdgst": false, 00:20:28.804 "ddgst": false 00:20:28.804 }, 00:20:28.804 "method": "bdev_nvme_attach_controller" 00:20:28.804 },{ 00:20:28.804 "params": { 00:20:28.804 "name": "Nvme10", 00:20:28.804 "trtype": "tcp", 00:20:28.804 "traddr": "10.0.0.2", 00:20:28.804 "adrfam": "ipv4", 00:20:28.804 "trsvcid": "4420", 00:20:28.804 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:28.804 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:28.804 "hdgst": false, 00:20:28.804 "ddgst": false 00:20:28.804 }, 00:20:28.804 "method": "bdev_nvme_attach_controller" 00:20:28.804 }' 00:20:28.804 [2024-04-24 20:51:53.344116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.804 [2024-04-24 20:51:53.406128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.190 Running I/O for 1 seconds...
00:20:31.576
00:20:31.576 Latency(us)
00:20:31.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.576 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme1n1 : 1.11 230.56 14.41 0.00 0.00 271134.08 20097.71 248162.99
00:20:31.576 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme2n1 : 1.07 243.25 15.20 0.00 0.00 253942.98 2771.63 235929.60
00:20:31.576 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme3n1 : 1.08 237.81 14.86 0.00 0.00 256560.00 19005.44 242920.11
00:20:31.576 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme4n1 : 1.08 245.31 15.33 0.00 0.00 241947.65 2853.55 244667.73
00:20:31.576 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme5n1 : 1.08 236.22 14.76 0.00 0.00 248708.91 15400.96 248162.99
00:20:31.576 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme6n1 : 1.15 278.78 17.42 0.00 0.00 207755.26 20206.93 244667.73
00:20:31.576 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme7n1 : 1.19 269.85 16.87 0.00 0.00 211631.10 14527.15 248162.99
00:20:31.576 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme8n1 : 1.14 224.18 14.01 0.00 0.00 248638.51 19005.44 249910.61
00:20:31.576 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme9n1 : 1.18 271.88 16.99 0.00 0.00 202085.72 14745.60 244667.73
00:20:31.576 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:31.576 Verification LBA range: start 0x0 length 0x400
00:20:31.576 Nvme10n1 : 1.20 267.46 16.72 0.00 0.00 202317.82 10649.60 267386.88
00:20:31.576 ===================================================================================================================
00:20:31.576 Total : 2505.29 156.58 0.00 0.00 231950.05 2771.63 267386.88
00:20:31.576 20:51:56 -- target/shutdown.sh@94 -- # stoptarget 00:20:31.576 20:51:56 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:31.576 20:51:56 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:31.576 20:51:56 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:31.576 20:51:56 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:31.576 20:51:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:31.576 20:51:56 -- nvmf/common.sh@117 -- # sync 00:20:31.576 20:51:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.576 20:51:56 -- nvmf/common.sh@120 -- # set +e 00:20:31.576 20:51:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.576 20:51:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.576 rmmod nvme_tcp 00:20:31.576 rmmod nvme_fabrics 00:20:31.576 rmmod nvme_keyring 00:20:31.576 20:51:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.576 20:51:56 -- nvmf/common.sh@124 -- # set -e 00:20:31.576 20:51:56 -- nvmf/common.sh@125 -- # return 0 00:20:31.576 20:51:56 -- nvmf/common.sh@478 -- # '[' -n 2829743 ']' 00:20:31.576 20:51:56 -- nvmf/common.sh@479 -- # killprocess 2829743 00:20:31.576 20:51:56 -- common/autotest_common.sh@936 -- # '[' -z 2829743 ']' 00:20:31.576 20:51:56 -- common/autotest_common.sh@940 -- # kill -0 2829743 00:20:31.576 20:51:56 -- common/autotest_common.sh@941 -- # uname 00:20:31.576 20:51:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:31.576 20:51:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2829743 00:20:31.576 20:51:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:31.576 20:51:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:31.576 20:51:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2829743' 00:20:31.576 killing process with pid 2829743 00:20:31.576 20:51:56 -- common/autotest_common.sh@955 -- # kill 2829743 00:20:31.576 20:51:56 -- common/autotest_common.sh@960 -- # wait 2829743 00:20:31.837 20:51:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:31.837 20:51:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:31.837 20:51:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:31.837 20:51:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.837 20:51:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.837 20:51:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.837 20:51:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.837 20:51:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.382 20:51:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:34.382 00:20:34.382 real 0m15.726s 00:20:34.382 user 0m31.131s 00:20:34.382 sys 0m6.438s 00:20:34.382 20:51:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:34.382 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:20:34.382 ************************************ 00:20:34.382 END TEST nvmf_shutdown_tc1 00:20:34.382 ************************************ 00:20:34.382 20:51:58 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2
nvmf_shutdown_tc2 00:20:34.382 20:51:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:34.382 20:51:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:34.382 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:20:34.382 ************************************ 00:20:34.382 START TEST nvmf_shutdown_tc2 00:20:34.382 ************************************ 00:20:34.382 20:51:58 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc2 00:20:34.382 20:51:58 -- target/shutdown.sh@99 -- # starttarget 00:20:34.382 20:51:58 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:34.382 20:51:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:34.382 20:51:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.382 20:51:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:34.382 20:51:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:34.382 20:51:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:34.382 20:51:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.382 20:51:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:34.382 20:51:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.382 20:51:58 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:34.382 20:51:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:34.382 20:51:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.382 20:51:58 -- common/autotest_common.sh@10 -- # set +x 00:20:34.382 20:51:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:34.382 20:51:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.382 20:51:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.382 20:51:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.382 20:51:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.382 20:51:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.382 20:51:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.382 20:51:58 -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.382 20:51:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.382 20:51:58 -- nvmf/common.sh@296 -- # e810=() 00:20:34.382 20:51:58 -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.382 20:51:58 -- nvmf/common.sh@297 -- # x722=() 00:20:34.382 20:51:58 -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.382 20:51:58 -- nvmf/common.sh@298 -- # mlx=() 00:20:34.382 20:51:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.383 20:51:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.383 20:51:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.383 
20:51:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.383 20:51:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.383 20:51:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.383 20:51:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:34.383 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:34.383 20:51:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.383 20:51:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:34.383 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:34.383 20:51:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.383 20:51:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.383 20:51:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.383 20:51:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.383 20:51:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.383 20:51:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:34.383 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:34.383 20:51:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.383 20:51:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.383 20:51:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.383 20:51:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:34.383 20:51:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.383 20:51:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:34.383 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:34.383 20:51:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.383 20:51:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:34.383 20:51:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:34.383 20:51:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:34.383 20:51:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:34.383 20:51:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.383 20:51:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.383 20:51:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.383 20:51:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.383 20:51:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.383 20:51:58 -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.383 20:51:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.383 20:51:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.383 20:51:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.383 20:51:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.383 20:51:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.383 20:51:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.383 20:51:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.383 20:51:58 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.383 20:51:58 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.383 20:51:58 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.383 20:51:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.644 20:51:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.644 20:51:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.644 20:51:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms 00:20:34.644 00:20:34.644 --- 10.0.0.2 ping statistics --- 00:20:34.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.644 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms 00:20:34.644 20:51:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:20:34.644 00:20:34.644 --- 10.0.0.1 ping statistics --- 00:20:34.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.644 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:20:34.644 20:51:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.644 20:51:59 -- nvmf/common.sh@411 -- # return 0 00:20:34.644 20:51:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:34.644 20:51:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.644 20:51:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:34.644 20:51:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:34.644 20:51:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.644 20:51:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:34.644 20:51:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:34.644 20:51:59 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:34.644 20:51:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:34.644 20:51:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:34.644 20:51:59 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 20:51:59 -- nvmf/common.sh@470 -- # nvmfpid=2831616 00:20:34.644 20:51:59 -- nvmf/common.sh@471 -- # waitforlisten 2831616 00:20:34.644 20:51:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:34.644 20:51:59 -- common/autotest_common.sh@817 -- # '[' -z 2831616 ']' 00:20:34.644 20:51:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.644 20:51:59 -- 
common/autotest_common.sh@822 -- # local max_retries=100 00:20:34.644 20:51:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.644 20:51:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:34.644 20:51:59 -- common/autotest_common.sh@10 -- # set +x 00:20:34.644 [2024-04-24 20:51:59.185994] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:20:34.644 [2024-04-24 20:51:59.186056] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.644 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.644 [2024-04-24 20:51:59.255649] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:34.905 [2024-04-24 20:51:59.327523] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:34.905 [2024-04-24 20:51:59.327562] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:34.905 [2024-04-24 20:51:59.327571] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:34.905 [2024-04-24 20:51:59.327579] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:34.905 [2024-04-24 20:51:59.327584] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:34.905 [2024-04-24 20:51:59.327721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.905 [2024-04-24 20:51:59.327839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:34.905 [2024-04-24 20:51:59.328088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:34.905 [2024-04-24 20:51:59.328088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.475 20:52:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:35.475 20:52:00 -- common/autotest_common.sh@850 -- # return 0 00:20:35.475 20:52:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:35.475 20:52:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.475 20:52:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.475 20:52:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.475 20:52:00 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.475 20:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.475 20:52:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.475 [2024-04-24 20:52:00.098969] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.475 20:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.475 20:52:00 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:35.475 20:52:00 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:35.475 20:52:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.475 20:52:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.475 20:52:00 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- 
target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.735 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.735 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.736 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.736 20:52:00 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:35.736 20:52:00 -- target/shutdown.sh@28 -- # cat 00:20:35.736 20:52:00 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:35.736 20:52:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:35.736 20:52:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.736 Malloc1 00:20:35.736 [2024-04-24 20:52:00.202389] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:35.736 Malloc2 00:20:35.736 Malloc3 00:20:35.736 Malloc4 00:20:35.736 Malloc5 00:20:35.736 Malloc6 00:20:35.996 Malloc7 00:20:35.996 Malloc8 00:20:35.996 Malloc9 00:20:35.996 Malloc10 00:20:35.996 20:52:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:35.996 20:52:00 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:35.996 20:52:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:35.996 20:52:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.996 20:52:00 -- target/shutdown.sh@103 -- # perfpid=2831992 00:20:35.996 20:52:00 -- target/shutdown.sh@104 -- # waitforlisten 2831992 /var/tmp/bdevperf.sock 00:20:35.996 20:52:00 -- common/autotest_common.sh@817 -- # '[' -z 2831992 ']' 00:20:35.996 20:52:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.996 20:52:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.996 20:52:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
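As in tc1, the harness does not blindly sleep after forking bdevperf: the 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' message comes from waitforlisten, which keeps probing the application's RPC socket (max_retries=100 in the trace) before any rpc_cmd -s /var/tmp/bdevperf.sock call such as framework_wait_init is attempted. A minimal stand-in for that helper (a sketch, not the autotest_common.sh implementation):

wait_for_rpc_socket() {    # usage: wait_for_rpc_socket <pid> [socket]; hypothetical helper name
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1                          # give up if the process already died
        if ./scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0                                                     # RPC server is up and answering
        fi
        sleep 0.1
    done
    return 1
}

wait_for_rpc_socket "$perfpid" /var/tmp/bdevperf.sock    # e.g. before rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init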
00:20:35.996 20:52:00 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:35.996 20:52:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.996 20:52:00 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:35.996 20:52:00 -- common/autotest_common.sh@10 -- # set +x 00:20:35.996 20:52:00 -- nvmf/common.sh@521 -- # config=() 00:20:35.996 20:52:00 -- nvmf/common.sh@521 -- # local subsystem config 00:20:35.996 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.997 { 00:20:35.997 "params": { 00:20:35.997 "name": "Nvme$subsystem", 00:20:35.997 "trtype": "$TEST_TRANSPORT", 00:20:35.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.997 "adrfam": "ipv4", 00:20:35.997 "trsvcid": "$NVMF_PORT", 00:20:35.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.997 "hdgst": ${hdgst:-false}, 00:20:35.997 "ddgst": ${ddgst:-false} 00:20:35.997 }, 00:20:35.997 "method": "bdev_nvme_attach_controller" 00:20:35.997 } 00:20:35.997 EOF 00:20:35.997 )") 00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:35.997 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.997 { 00:20:35.997 "params": { 00:20:35.997 "name": "Nvme$subsystem", 00:20:35.997 "trtype": "$TEST_TRANSPORT", 00:20:35.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.997 "adrfam": "ipv4", 00:20:35.997 "trsvcid": "$NVMF_PORT", 00:20:35.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.997 "hdgst": ${hdgst:-false}, 00:20:35.997 "ddgst": ${ddgst:-false} 00:20:35.997 }, 00:20:35.997 "method": "bdev_nvme_attach_controller" 00:20:35.997 } 00:20:35.997 EOF 00:20:35.997 )") 00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:35.997 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.997 { 00:20:35.997 "params": { 00:20:35.997 "name": "Nvme$subsystem", 00:20:35.997 "trtype": "$TEST_TRANSPORT", 00:20:35.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.997 "adrfam": "ipv4", 00:20:35.997 "trsvcid": "$NVMF_PORT", 00:20:35.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.997 "hdgst": ${hdgst:-false}, 00:20:35.997 "ddgst": ${ddgst:-false} 00:20:35.997 }, 00:20:35.997 "method": "bdev_nvme_attach_controller" 00:20:35.997 } 00:20:35.997 EOF 00:20:35.997 )") 00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:35.997 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:35.997 { 00:20:35.997 "params": { 00:20:35.997 "name": "Nvme$subsystem", 00:20:35.997 "trtype": "$TEST_TRANSPORT", 00:20:35.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:35.997 "adrfam": "ipv4", 00:20:35.997 "trsvcid": "$NVMF_PORT", 00:20:35.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:35.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:35.997 "hdgst": ${hdgst:-false}, 00:20:35.997 "ddgst": ${ddgst:-false} 00:20:35.997 }, 00:20:35.997 "method": "bdev_nvme_attach_controller" 00:20:35.997 } 00:20:35.997 EOF 00:20:35.997 )") 
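The same heredoc blocks are emitted once more to build the tc2 bdevperf configuration. The trace prints the source text of each config+= command, which is why the placeholders still read Nvme$subsystem and $NVMF_FIRST_TARGET_IP here even though they are substituted the moment the heredoc is read: an unquoted delimiter (<<EOF, or <<-EOF with tab stripping) expands variables, while a quoted delimiter does not. A quick illustration:

NVMF_PORT=4420
cat <<EOF       # unquoted delimiter: prints "trsvcid 4420"
trsvcid $NVMF_PORT
EOF
cat <<'EOF'     # quoted delimiter: prints "trsvcid $NVMF_PORT" literally
trsvcid $NVMF_PORT
EOF

That expansion is what turned the identical fragments into the concrete Nvme1 through Nvme10 entries with 10.0.0.2 and port 4420 seen in the earlier printf output, and the tc2 run produces the same expansion.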
00:20:35.997 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:36.258 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.258 { 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme$subsystem", 00:20:36.258 "trtype": "$TEST_TRANSPORT", 00:20:36.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "$NVMF_PORT", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.258 "hdgst": ${hdgst:-false}, 00:20:36.258 "ddgst": ${ddgst:-false} 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 } 00:20:36.258 EOF 00:20:36.258 )") 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:36.258 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.258 { 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme$subsystem", 00:20:36.258 "trtype": "$TEST_TRANSPORT", 00:20:36.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "$NVMF_PORT", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.258 "hdgst": ${hdgst:-false}, 00:20:36.258 "ddgst": ${ddgst:-false} 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 } 00:20:36.258 EOF 00:20:36.258 )") 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:36.258 [2024-04-24 20:52:00.648194] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:20:36.258 [2024-04-24 20:52:00.648244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2831992 ] 00:20:36.258 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.258 { 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme$subsystem", 00:20:36.258 "trtype": "$TEST_TRANSPORT", 00:20:36.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "$NVMF_PORT", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.258 "hdgst": ${hdgst:-false}, 00:20:36.258 "ddgst": ${ddgst:-false} 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 } 00:20:36.258 EOF 00:20:36.258 )") 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:36.258 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.258 { 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme$subsystem", 00:20:36.258 "trtype": "$TEST_TRANSPORT", 00:20:36.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "$NVMF_PORT", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.258 "hdgst": ${hdgst:-false}, 00:20:36.258 "ddgst": ${ddgst:-false} 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 } 00:20:36.258 EOF 00:20:36.258 )") 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:36.258 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.258 { 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme$subsystem", 00:20:36.258 "trtype": "$TEST_TRANSPORT", 00:20:36.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "$NVMF_PORT", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.258 "hdgst": ${hdgst:-false}, 00:20:36.258 "ddgst": ${ddgst:-false} 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 } 00:20:36.258 EOF 00:20:36.258 )") 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:36.258 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.258 20:52:00 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:36.258 { 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme$subsystem", 00:20:36.258 "trtype": "$TEST_TRANSPORT", 00:20:36.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "$NVMF_PORT", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.258 "hdgst": ${hdgst:-false}, 00:20:36.258 "ddgst": ${ddgst:-false} 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 } 00:20:36.258 EOF 00:20:36.258 )") 00:20:36.258 20:52:00 -- nvmf/common.sh@543 -- # cat 00:20:36.258 20:52:00 -- nvmf/common.sh@545 -- # jq . 00:20:36.258 20:52:00 -- nvmf/common.sh@546 -- # IFS=, 00:20:36.258 20:52:00 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme1", 00:20:36.258 "trtype": "tcp", 00:20:36.258 "traddr": "10.0.0.2", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "4420", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.258 "hdgst": false, 00:20:36.258 "ddgst": false 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 },{ 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme2", 00:20:36.258 "trtype": "tcp", 00:20:36.258 "traddr": "10.0.0.2", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "4420", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.258 "hdgst": false, 00:20:36.258 "ddgst": false 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 },{ 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme3", 00:20:36.258 "trtype": "tcp", 00:20:36.258 "traddr": "10.0.0.2", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "4420", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.258 "hdgst": false, 00:20:36.258 "ddgst": false 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 },{ 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme4", 00:20:36.258 "trtype": "tcp", 00:20:36.258 "traddr": "10.0.0.2", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "4420", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.258 "hdgst": false, 00:20:36.258 "ddgst": false 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 },{ 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme5", 00:20:36.258 "trtype": "tcp", 00:20:36.258 "traddr": "10.0.0.2", 00:20:36.258 "adrfam": 
"ipv4", 00:20:36.258 "trsvcid": "4420", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.258 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.258 "hdgst": false, 00:20:36.258 "ddgst": false 00:20:36.258 }, 00:20:36.258 "method": "bdev_nvme_attach_controller" 00:20:36.258 },{ 00:20:36.258 "params": { 00:20:36.258 "name": "Nvme6", 00:20:36.258 "trtype": "tcp", 00:20:36.258 "traddr": "10.0.0.2", 00:20:36.258 "adrfam": "ipv4", 00:20:36.258 "trsvcid": "4420", 00:20:36.258 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.259 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.259 "hdgst": false, 00:20:36.259 "ddgst": false 00:20:36.259 }, 00:20:36.259 "method": "bdev_nvme_attach_controller" 00:20:36.259 },{ 00:20:36.259 "params": { 00:20:36.259 "name": "Nvme7", 00:20:36.259 "trtype": "tcp", 00:20:36.259 "traddr": "10.0.0.2", 00:20:36.259 "adrfam": "ipv4", 00:20:36.259 "trsvcid": "4420", 00:20:36.259 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.259 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.259 "hdgst": false, 00:20:36.259 "ddgst": false 00:20:36.259 }, 00:20:36.259 "method": "bdev_nvme_attach_controller" 00:20:36.259 },{ 00:20:36.259 "params": { 00:20:36.259 "name": "Nvme8", 00:20:36.259 "trtype": "tcp", 00:20:36.259 "traddr": "10.0.0.2", 00:20:36.259 "adrfam": "ipv4", 00:20:36.259 "trsvcid": "4420", 00:20:36.259 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.259 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.259 "hdgst": false, 00:20:36.259 "ddgst": false 00:20:36.259 }, 00:20:36.259 "method": "bdev_nvme_attach_controller" 00:20:36.259 },{ 00:20:36.259 "params": { 00:20:36.259 "name": "Nvme9", 00:20:36.259 "trtype": "tcp", 00:20:36.259 "traddr": "10.0.0.2", 00:20:36.259 "adrfam": "ipv4", 00:20:36.259 "trsvcid": "4420", 00:20:36.259 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.259 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:36.259 "hdgst": false, 00:20:36.259 "ddgst": false 00:20:36.259 }, 00:20:36.259 "method": "bdev_nvme_attach_controller" 00:20:36.259 },{ 00:20:36.259 "params": { 00:20:36.259 "name": "Nvme10", 00:20:36.259 "trtype": "tcp", 00:20:36.259 "traddr": "10.0.0.2", 00:20:36.259 "adrfam": "ipv4", 00:20:36.259 "trsvcid": "4420", 00:20:36.259 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.259 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.259 "hdgst": false, 00:20:36.259 "ddgst": false 00:20:36.259 }, 00:20:36.259 "method": "bdev_nvme_attach_controller" 00:20:36.259 }' 00:20:36.259 [2024-04-24 20:52:00.726481] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.259 [2024-04-24 20:52:00.788987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.643 Running I/O for 10 seconds... 
00:20:37.643 20:52:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.643 20:52:01 -- common/autotest_common.sh@850 -- # return 0 00:20:37.643 20:52:01 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:37.643 20:52:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.643 20:52:01 -- common/autotest_common.sh@10 -- # set +x 00:20:37.643 20:52:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.643 20:52:02 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:37.643 20:52:02 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:37.643 20:52:02 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:37.643 20:52:02 -- target/shutdown.sh@57 -- # local ret=1 00:20:37.643 20:52:02 -- target/shutdown.sh@58 -- # local i 00:20:37.643 20:52:02 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:37.643 20:52:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:37.643 20:52:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:37.643 20:52:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:37.643 20:52:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.643 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:20:37.643 20:52:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.643 20:52:02 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:37.643 20:52:02 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:37.643 20:52:02 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:37.903 20:52:02 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:37.903 20:52:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:37.903 20:52:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:37.903 20:52:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:37.903 20:52:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.903 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:20:37.903 20:52:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.903 20:52:02 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:37.903 20:52:02 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:37.903 20:52:02 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:38.164 20:52:02 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:38.164 20:52:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:38.164 20:52:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.164 20:52:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.164 20:52:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.164 20:52:02 -- common/autotest_common.sh@10 -- # set +x 00:20:38.164 20:52:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.164 20:52:02 -- target/shutdown.sh@60 -- # read_io_count=195 00:20:38.164 20:52:02 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:20:38.164 20:52:02 -- target/shutdown.sh@64 -- # ret=0 00:20:38.164 20:52:02 -- target/shutdown.sh@65 -- # break 00:20:38.164 20:52:02 -- target/shutdown.sh@69 -- # return 0 00:20:38.164 20:52:02 -- target/shutdown.sh@110 -- # killprocess 2831992 00:20:38.164 20:52:02 -- common/autotest_common.sh@936 -- # '[' -z 2831992 ']' 00:20:38.164 20:52:02 -- common/autotest_common.sh@940 -- # kill -0 2831992 00:20:38.164 20:52:02 -- common/autotest_common.sh@941 -- # uname 00:20:38.164 20:52:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:20:38.164 20:52:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2831992 00:20:38.424 20:52:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:38.424 20:52:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:38.424 20:52:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2831992' 00:20:38.424 killing process with pid 2831992 00:20:38.424 20:52:02 -- common/autotest_common.sh@955 -- # kill 2831992 00:20:38.424 20:52:02 -- common/autotest_common.sh@960 -- # wait 2831992 00:20:38.424 Received shutdown signal, test time was about 0.953436 seconds 00:20:38.424 00:20:38.424 Latency(us) 00:20:38.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.424 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.424 Verification LBA range: start 0x0 length 0x400 00:20:38.424 Nvme1n1 : 0.94 273.67 17.10 0.00 0.00 230965.12 17585.49 242920.11 00:20:38.424 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.424 Verification LBA range: start 0x0 length 0x400 00:20:38.424 Nvme2n1 : 0.95 270.77 16.92 0.00 0.00 228656.85 24903.68 255153.49 00:20:38.424 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.424 Verification LBA range: start 0x0 length 0x400 00:20:38.424 Nvme3n1 : 0.92 209.19 13.07 0.00 0.00 289243.59 22173.01 242920.11 00:20:38.424 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.425 Verification LBA range: start 0x0 length 0x400 00:20:38.425 Nvme4n1 : 0.94 272.26 17.02 0.00 0.00 216573.23 23265.28 219327.15 00:20:38.425 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.425 Verification LBA range: start 0x0 length 0x400 00:20:38.425 Nvme5n1 : 0.95 269.80 16.86 0.00 0.00 215012.69 28617.39 221948.59 00:20:38.425 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.425 Verification LBA range: start 0x0 length 0x400 00:20:38.425 Nvme6n1 : 0.95 268.75 16.80 0.00 0.00 211108.48 20971.52 239424.85 00:20:38.425 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.425 Verification LBA range: start 0x0 length 0x400 00:20:38.425 Nvme7n1 : 0.91 210.65 13.17 0.00 0.00 261275.31 16384.00 249910.61 00:20:38.425 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.425 Verification LBA range: start 0x0 length 0x400 00:20:38.425 Nvme8n1 : 0.93 206.42 12.90 0.00 0.00 261011.06 17148.59 253405.87 00:20:38.425 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.425 Verification LBA range: start 0x0 length 0x400 00:20:38.425 Nvme9n1 : 0.93 207.14 12.95 0.00 0.00 253496.89 32112.64 234181.97 00:20:38.425 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:38.425 Verification LBA range: start 0x0 length 0x400 00:20:38.425 Nvme10n1 : 0.94 203.98 12.75 0.00 0.00 251623.54 22500.69 274377.39 00:20:38.425 =================================================================================================================== 00:20:38.425 Total : 2392.63 149.54 0.00 0.00 238834.76 16384.00 274377.39 00:20:38.425 20:52:03 -- target/shutdown.sh@113 -- # sleep 1 00:20:39.820 20:52:04 -- target/shutdown.sh@114 -- # kill -0 2831616 00:20:39.820 20:52:04 -- target/shutdown.sh@116 -- # stoptarget 00:20:39.820 20:52:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:39.820 20:52:04 -- target/shutdown.sh@42 
-- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:39.820 20:52:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:39.820 20:52:04 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:39.820 20:52:04 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.820 20:52:04 -- nvmf/common.sh@117 -- # sync 00:20:39.820 20:52:04 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.820 20:52:04 -- nvmf/common.sh@120 -- # set +e 00:20:39.820 20:52:04 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.820 20:52:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.820 rmmod nvme_tcp 00:20:39.820 rmmod nvme_fabrics 00:20:39.820 rmmod nvme_keyring 00:20:39.820 20:52:04 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.820 20:52:04 -- nvmf/common.sh@124 -- # set -e 00:20:39.820 20:52:04 -- nvmf/common.sh@125 -- # return 0 00:20:39.820 20:52:04 -- nvmf/common.sh@478 -- # '[' -n 2831616 ']' 00:20:39.820 20:52:04 -- nvmf/common.sh@479 -- # killprocess 2831616 00:20:39.820 20:52:04 -- common/autotest_common.sh@936 -- # '[' -z 2831616 ']' 00:20:39.820 20:52:04 -- common/autotest_common.sh@940 -- # kill -0 2831616 00:20:39.820 20:52:04 -- common/autotest_common.sh@941 -- # uname 00:20:39.820 20:52:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.820 20:52:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2831616 00:20:39.820 20:52:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:39.820 20:52:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:39.820 20:52:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2831616' 00:20:39.820 killing process with pid 2831616 00:20:39.820 20:52:04 -- common/autotest_common.sh@955 -- # kill 2831616 00:20:39.820 20:52:04 -- common/autotest_common.sh@960 -- # wait 2831616 00:20:39.820 20:52:04 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:39.820 20:52:04 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:39.820 20:52:04 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:39.820 20:52:04 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:39.820 20:52:04 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:39.820 20:52:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.820 20:52:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:39.820 20:52:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.365 20:52:06 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:42.365 00:20:42.365 real 0m7.765s 00:20:42.365 user 0m23.090s 00:20:42.365 sys 0m1.274s 00:20:42.365 20:52:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:42.365 20:52:06 -- common/autotest_common.sh@10 -- # set +x 00:20:42.365 ************************************ 00:20:42.365 END TEST nvmf_shutdown_tc2 00:20:42.365 ************************************ 00:20:42.365 20:52:06 -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:42.365 20:52:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:42.365 20:52:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.365 20:52:06 -- common/autotest_common.sh@10 -- # set +x 00:20:42.365 ************************************ 00:20:42.365 START TEST nvmf_shutdown_tc3 00:20:42.365 ************************************ 00:20:42.365 20:52:06 -- common/autotest_common.sh@1111 -- # nvmf_shutdown_tc3 
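Before the tc3 body starts, note that the tc2 teardown captured just above (nvmftestfini followed by nvmf_tcp_fini) reduces to roughly these host-side steps; the namespace removal is an assumption about what _remove_spdk_ns boils down to in this setup:

sync
modprobe -v -r nvme-tcp            # unloads nvme_tcp, nvme_fabrics, nvme_keyring as logged
modprobe -v -r nvme-fabrics
kill "$nvmfpid"; wait "$nvmfpid"   # 2831616, the tc2 nvmf_tgt
ip netns delete cvl_0_0_ns_spdk    # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1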
00:20:42.365 20:52:06 -- target/shutdown.sh@121 -- # starttarget 00:20:42.365 20:52:06 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:42.365 20:52:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:42.365 20:52:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.365 20:52:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:42.365 20:52:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:42.365 20:52:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:42.365 20:52:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.365 20:52:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.365 20:52:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.365 20:52:06 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:42.365 20:52:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.365 20:52:06 -- common/autotest_common.sh@10 -- # set +x 00:20:42.365 20:52:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:42.365 20:52:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:42.365 20:52:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.365 20:52:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.365 20:52:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.365 20:52:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.365 20:52:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.365 20:52:06 -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.365 20:52:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.365 20:52:06 -- nvmf/common.sh@296 -- # e810=() 00:20:42.365 20:52:06 -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.365 20:52:06 -- nvmf/common.sh@297 -- # x722=() 00:20:42.365 20:52:06 -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.365 20:52:06 -- nvmf/common.sh@298 -- # mlx=() 00:20:42.365 20:52:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.365 20:52:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.365 20:52:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.365 20:52:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.365 20:52:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.365 20:52:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.365 20:52:06 -- nvmf/common.sh@341 -- # echo 
'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:42.365 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:42.365 20:52:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.365 20:52:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:42.365 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:42.365 20:52:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.365 20:52:06 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.365 20:52:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.365 20:52:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:42.365 20:52:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.365 20:52:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:42.365 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:42.365 20:52:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.365 20:52:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.365 20:52:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.365 20:52:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:42.365 20:52:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.365 20:52:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:42.365 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:42.365 20:52:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.365 20:52:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:42.365 20:52:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:42.365 20:52:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:42.365 20:52:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:42.365 20:52:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.365 20:52:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.365 20:52:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.365 20:52:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:42.365 20:52:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.365 20:52:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.365 20:52:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:42.365 20:52:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.365 20:52:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.365 20:52:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:42.365 20:52:06 -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:20:42.365 20:52:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.365 20:52:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.365 20:52:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.365 20:52:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.365 20:52:06 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.365 20:52:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.626 20:52:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.626 20:52:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.626 20:52:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:20:42.626 00:20:42.626 --- 10.0.0.2 ping statistics --- 00:20:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.626 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:20:42.626 20:52:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:20:42.626 00:20:42.626 --- 10.0.0.1 ping statistics --- 00:20:42.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.626 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:20:42.626 20:52:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.626 20:52:07 -- nvmf/common.sh@411 -- # return 0 00:20:42.626 20:52:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:42.626 20:52:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.626 20:52:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:42.626 20:52:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:42.626 20:52:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.626 20:52:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:42.626 20:52:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:42.626 20:52:07 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:42.626 20:52:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:42.626 20:52:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:42.626 20:52:07 -- common/autotest_common.sh@10 -- # set +x 00:20:42.626 20:52:07 -- nvmf/common.sh@470 -- # nvmfpid=2833380 00:20:42.626 20:52:07 -- nvmf/common.sh@471 -- # waitforlisten 2833380 00:20:42.626 20:52:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:42.626 20:52:07 -- common/autotest_common.sh@817 -- # '[' -z 2833380 ']' 00:20:42.626 20:52:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.626 20:52:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:42.626 20:52:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
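Condensed from the nvmf_tcp_init trace above: the two physical e810 ports are split across network namespaces so the target (cvl_0_0, 10.0.0.2) and the initiator (cvl_0_1, 10.0.0.1) talk over a real link, with NVMe/TCP traffic on port 4420 allowed through; run as root:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP into the initiator side
ping -c 1 10.0.0.2                                               # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator sanity check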
00:20:42.626 20:52:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:42.626 20:52:07 -- common/autotest_common.sh@10 -- # set +x 00:20:42.626 [2024-04-24 20:52:07.152810] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:20:42.626 [2024-04-24 20:52:07.152871] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.626 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.626 [2024-04-24 20:52:07.225404] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.886 [2024-04-24 20:52:07.298060] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.886 [2024-04-24 20:52:07.298099] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.886 [2024-04-24 20:52:07.298108] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.886 [2024-04-24 20:52:07.298115] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.886 [2024-04-24 20:52:07.298121] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.886 [2024-04-24 20:52:07.298235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.886 [2024-04-24 20:52:07.298390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.886 [2024-04-24 20:52:07.298548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.886 [2024-04-24 20:52:07.298549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:43.457 20:52:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:43.457 20:52:08 -- common/autotest_common.sh@850 -- # return 0 00:20:43.457 20:52:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:43.457 20:52:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:43.457 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:20:43.457 20:52:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.457 20:52:08 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.457 20:52:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.457 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:20:43.457 [2024-04-24 20:52:08.068642] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.457 20:52:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.457 20:52:08 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:43.457 20:52:08 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:43.457 20:52:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:43.457 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:20:43.457 20:52:08 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.457 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.457 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.457 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.457 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 
00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:43.718 20:52:08 -- target/shutdown.sh@28 -- # cat 00:20:43.718 20:52:08 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:43.718 20:52:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:43.718 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:20:43.718 Malloc1 00:20:43.718 [2024-04-24 20:52:08.172137] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.718 Malloc2 00:20:43.718 Malloc3 00:20:43.718 Malloc4 00:20:43.718 Malloc5 00:20:43.718 Malloc6 00:20:43.978 Malloc7 00:20:43.978 Malloc8 00:20:43.978 Malloc9 00:20:43.978 Malloc10 00:20:43.978 20:52:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:43.978 20:52:08 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:43.978 20:52:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:43.978 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:20:43.978 20:52:08 -- target/shutdown.sh@125 -- # perfpid=2833636 00:20:43.978 20:52:08 -- target/shutdown.sh@126 -- # waitforlisten 2833636 /var/tmp/bdevperf.sock 00:20:43.978 20:52:08 -- common/autotest_common.sh@817 -- # '[' -z 2833636 ']' 00:20:43.978 20:52:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.978 20:52:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:43.978 20:52:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:43.978 20:52:08 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:43.978 20:52:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:43.979 20:52:08 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:43.979 20:52:08 -- common/autotest_common.sh@10 -- # set +x 00:20:43.979 20:52:08 -- nvmf/common.sh@521 -- # config=() 00:20:43.979 20:52:08 -- nvmf/common.sh@521 -- # local subsystem config 00:20:43.979 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:43.979 { 00:20:43.979 "params": { 00:20:43.979 "name": "Nvme$subsystem", 00:20:43.979 "trtype": "$TEST_TRANSPORT", 00:20:43.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.979 "adrfam": "ipv4", 00:20:43.979 "trsvcid": "$NVMF_PORT", 00:20:43.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.979 "hdgst": ${hdgst:-false}, 00:20:43.979 "ddgst": ${ddgst:-false} 00:20:43.979 }, 00:20:43.979 "method": "bdev_nvme_attach_controller" 00:20:43.979 } 00:20:43.979 EOF 00:20:43.979 )") 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:43.979 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:43.979 { 00:20:43.979 "params": { 00:20:43.979 "name": "Nvme$subsystem", 00:20:43.979 "trtype": "$TEST_TRANSPORT", 00:20:43.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.979 "adrfam": "ipv4", 00:20:43.979 "trsvcid": "$NVMF_PORT", 00:20:43.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.979 "hdgst": ${hdgst:-false}, 00:20:43.979 "ddgst": ${ddgst:-false} 00:20:43.979 }, 00:20:43.979 "method": "bdev_nvme_attach_controller" 00:20:43.979 } 00:20:43.979 EOF 00:20:43.979 )") 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:43.979 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:43.979 { 00:20:43.979 "params": { 00:20:43.979 "name": "Nvme$subsystem", 00:20:43.979 "trtype": "$TEST_TRANSPORT", 00:20:43.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.979 "adrfam": "ipv4", 00:20:43.979 "trsvcid": "$NVMF_PORT", 00:20:43.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.979 "hdgst": ${hdgst:-false}, 00:20:43.979 "ddgst": ${ddgst:-false} 00:20:43.979 }, 00:20:43.979 "method": "bdev_nvme_attach_controller" 00:20:43.979 } 00:20:43.979 EOF 00:20:43.979 )") 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:43.979 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:43.979 { 00:20:43.979 "params": { 00:20:43.979 "name": "Nvme$subsystem", 00:20:43.979 "trtype": "$TEST_TRANSPORT", 00:20:43.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.979 "adrfam": "ipv4", 00:20:43.979 "trsvcid": "$NVMF_PORT", 00:20:43.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.979 "hdgst": ${hdgst:-false}, 00:20:43.979 "ddgst": ${ddgst:-false} 00:20:43.979 }, 00:20:43.979 "method": "bdev_nvme_attach_controller" 00:20:43.979 } 00:20:43.979 EOF 00:20:43.979 )") 
00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:43.979 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:43.979 { 00:20:43.979 "params": { 00:20:43.979 "name": "Nvme$subsystem", 00:20:43.979 "trtype": "$TEST_TRANSPORT", 00:20:43.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:43.979 "adrfam": "ipv4", 00:20:43.979 "trsvcid": "$NVMF_PORT", 00:20:43.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:43.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:43.979 "hdgst": ${hdgst:-false}, 00:20:43.979 "ddgst": ${ddgst:-false} 00:20:43.979 }, 00:20:43.979 "method": "bdev_nvme_attach_controller" 00:20:43.979 } 00:20:43.979 EOF 00:20:43.979 )") 00:20:43.979 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:44.240 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.240 { 00:20:44.240 "params": { 00:20:44.240 "name": "Nvme$subsystem", 00:20:44.240 "trtype": "$TEST_TRANSPORT", 00:20:44.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.240 "adrfam": "ipv4", 00:20:44.240 "trsvcid": "$NVMF_PORT", 00:20:44.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.240 "hdgst": ${hdgst:-false}, 00:20:44.240 "ddgst": ${ddgst:-false} 00:20:44.240 }, 00:20:44.240 "method": "bdev_nvme_attach_controller" 00:20:44.240 } 00:20:44.240 EOF 00:20:44.240 )") 00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:44.240 [2024-04-24 20:52:08.626916] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:20:44.240 [2024-04-24 20:52:08.626969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2833636 ] 00:20:44.240 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.240 { 00:20:44.240 "params": { 00:20:44.240 "name": "Nvme$subsystem", 00:20:44.240 "trtype": "$TEST_TRANSPORT", 00:20:44.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.240 "adrfam": "ipv4", 00:20:44.240 "trsvcid": "$NVMF_PORT", 00:20:44.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.240 "hdgst": ${hdgst:-false}, 00:20:44.240 "ddgst": ${ddgst:-false} 00:20:44.240 }, 00:20:44.240 "method": "bdev_nvme_attach_controller" 00:20:44.240 } 00:20:44.240 EOF 00:20:44.240 )") 00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:44.240 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.240 { 00:20:44.240 "params": { 00:20:44.240 "name": "Nvme$subsystem", 00:20:44.240 "trtype": "$TEST_TRANSPORT", 00:20:44.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.240 "adrfam": "ipv4", 00:20:44.240 "trsvcid": "$NVMF_PORT", 00:20:44.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.240 "hdgst": ${hdgst:-false}, 00:20:44.240 "ddgst": ${ddgst:-false} 00:20:44.240 }, 00:20:44.240 "method": "bdev_nvme_attach_controller" 00:20:44.240 } 00:20:44.240 EOF 00:20:44.240 )") 00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:44.240 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 
00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.240 { 00:20:44.240 "params": { 00:20:44.240 "name": "Nvme$subsystem", 00:20:44.240 "trtype": "$TEST_TRANSPORT", 00:20:44.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.240 "adrfam": "ipv4", 00:20:44.240 "trsvcid": "$NVMF_PORT", 00:20:44.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.240 "hdgst": ${hdgst:-false}, 00:20:44.240 "ddgst": ${ddgst:-false} 00:20:44.240 }, 00:20:44.240 "method": "bdev_nvme_attach_controller" 00:20:44.240 } 00:20:44.240 EOF 00:20:44.240 )") 00:20:44.240 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:44.240 20:52:08 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:44.240 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.241 20:52:08 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:44.241 { 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme$subsystem", 00:20:44.241 "trtype": "$TEST_TRANSPORT", 00:20:44.241 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "$NVMF_PORT", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.241 "hdgst": ${hdgst:-false}, 00:20:44.241 "ddgst": ${ddgst:-false} 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 } 00:20:44.241 EOF 00:20:44.241 )") 00:20:44.241 20:52:08 -- nvmf/common.sh@543 -- # cat 00:20:44.241 20:52:08 -- nvmf/common.sh@545 -- # jq . 00:20:44.241 20:52:08 -- nvmf/common.sh@546 -- # IFS=, 00:20:44.241 20:52:08 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme1", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme2", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme3", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme4", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme5", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": 
"ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme6", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme7", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme8", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme9", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 },{ 00:20:44.241 "params": { 00:20:44.241 "name": "Nvme10", 00:20:44.241 "trtype": "tcp", 00:20:44.241 "traddr": "10.0.0.2", 00:20:44.241 "adrfam": "ipv4", 00:20:44.241 "trsvcid": "4420", 00:20:44.241 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:44.241 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:44.241 "hdgst": false, 00:20:44.241 "ddgst": false 00:20:44.241 }, 00:20:44.241 "method": "bdev_nvme_attach_controller" 00:20:44.241 }' 00:20:44.241 [2024-04-24 20:52:08.703299] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.241 [2024-04-24 20:52:08.766217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:46.192 Running I/O for 10 seconds... 
00:20:46.192 20:52:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:46.192 20:52:10 -- common/autotest_common.sh@850 -- # return 0 00:20:46.192 20:52:10 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:46.192 20:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.192 20:52:10 -- common/autotest_common.sh@10 -- # set +x 00:20:46.192 20:52:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.192 20:52:10 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.192 20:52:10 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:46.192 20:52:10 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:46.192 20:52:10 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:46.192 20:52:10 -- target/shutdown.sh@57 -- # local ret=1 00:20:46.192 20:52:10 -- target/shutdown.sh@58 -- # local i 00:20:46.192 20:52:10 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:46.192 20:52:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:46.192 20:52:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.192 20:52:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.192 20:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.192 20:52:10 -- common/autotest_common.sh@10 -- # set +x 00:20:46.192 20:52:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.192 20:52:10 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:46.192 20:52:10 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:46.192 20:52:10 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:46.453 20:52:10 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:46.453 20:52:10 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:46.453 20:52:10 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.453 20:52:10 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.453 20:52:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.453 20:52:10 -- common/autotest_common.sh@10 -- # set +x 00:20:46.453 20:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.453 20:52:11 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:46.453 20:52:11 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:46.453 20:52:11 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:46.713 20:52:11 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:46.713 20:52:11 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:46.713 20:52:11 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.713 20:52:11 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.713 20:52:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:46.713 20:52:11 -- common/autotest_common.sh@10 -- # set +x 00:20:46.713 20:52:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:46.713 20:52:11 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:46.713 20:52:11 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:46.713 20:52:11 -- target/shutdown.sh@64 -- # ret=0 00:20:46.713 20:52:11 -- target/shutdown.sh@65 -- # break 00:20:46.713 20:52:11 -- target/shutdown.sh@69 -- # return 0 00:20:46.713 20:52:11 -- target/shutdown.sh@135 -- # killprocess 2833380 00:20:46.713 20:52:11 -- common/autotest_common.sh@936 -- # '[' -z 2833380 ']' 00:20:46.713 20:52:11 -- common/autotest_common.sh@940 -- # kill 
-0 2833380
00:20:46.713 20:52:11 -- common/autotest_common.sh@941 -- # uname
00:20:46.713 20:52:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:20:46.713 20:52:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2833380
00:20:46.989 20:52:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:20:46.989 20:52:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:20:46.989 20:52:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2833380'
killing process with pid 2833380
20:52:11 -- common/autotest_common.sh@955 -- # kill 2833380
20:52:11 -- common/autotest_common.sh@960 -- # wait 2833380
00:20:46.989 [2024-04-24 20:52:11.402578] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de1e0 is same with the state(5) to be set
[... previous message repeated for tqpair=0x23de1e0 through 20:52:11.402947 ...]
00:20:46.989 [2024-04-24 20:52:11.404138] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0b10 is same with the state(5) to be set
[... previous message repeated for tqpair=0x23e0b10 through 20:52:11.404635 ...]
00:20:46.990 [2024-04-24 20:52:11.406112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de670 is same with the state(5) to be set
[... previous message repeated for tqpair=0x23de670 through 20:52:11.406559 ...]
00:20:46.991 [2024-04-24 20:52:11.407018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:46.991
[2024-04-24 20:52:11.407055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.991 [2024-04-24 20:52:11.407075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.991 [2024-04-24 20:52:11.407091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.991 [2024-04-24 20:52:11.407106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2420310 is same with the state(5) to be set 00:20:46.991 [2024-04-24 20:52:11.407156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.991 [2024-04-24 20:52:11.407165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.991 [2024-04-24 20:52:11.407186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.991 [2024-04-24 20:52:11.407201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.991 [2024-04-24 20:52:11.407216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.407223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ebcd0 is same with the state(5) to be set 00:20:46.991 [2024-04-24 20:52:11.408470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408515] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.991 [2024-04-24 20:52:11.408764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.991 [2024-04-24 20:52:11.408771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-24 20:52:11.408780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-24 20:52:11.408787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-24 20:52:11.408797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-24 20:52:11.408804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-24 20:52:11.408813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-24 20:52:11.408820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-24 20:52:11.408829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-24 20:52:11.408836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.992 [2024-04-24 20:52:11.408846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.992 [2024-04-24 20:52:11.408854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.992 [2024-04-24 20:52:11.408863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-24 20:52:11.408870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.992 [2024-04-24 20:52:11.408879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-24 20:52:11.408887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.992 [2024-04-24 20:52:11.408896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-24 20:52:11.408903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.992 [2024-04-24 20:52:11.408912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-24 20:52:11.408920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.992 [2024-04-24 20:52:11.408929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-24 20:52:11.408936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.992 [2024-04-24 20:52:11.408945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-24 20:52:11.408936] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23defb0 is same with the state(5) to be set
[... previous message repeated for tqpair=0x23defb0 through 20:52:11.409439, interleaved with the abort notices below ...]
00:20:46.992 [2024-04-24 20:52:11.408952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:46.992 [2024-04-24 20:52:11.408963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:46.992 [2024-04-24 20:52:11.408970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... READ commands sqid:1 cid:7 through cid:30 (nsid:1, lba 25472 through 28416, len:128) printed and completed as ABORTED - SQ DELETION (00/08) between 20:52:11.408980 and 20:52:11.409433 ...]
00:20:46.993 [2024-04-24 20:52:11.409442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-24 20:52:11.409446] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23defb0 is same with the state(5) to be set 00:20:46.993 [2024-04-24 20:52:11.409450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-24 20:52:11.409459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-24 20:52:11.409466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-24 20:52:11.409476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-24 20:52:11.409484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-24 20:52:11.409493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-24 20:52:11.409500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-24 20:52:11.409509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.993 [2024-04-24 20:52:11.409515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.993 [2024-04-24 20:52:11.409524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-24 20:52:11.409531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-24 20:52:11.409540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-24 20:52:11.409547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-24 20:52:11.409556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-24 20:52:11.409563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-24 20:52:11.409572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-24 20:52:11.409579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-24 20:52:11.409588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-24 20:52:11.409595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:46.994 [2024-04-24 20:52:11.409604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.994 [2024-04-24 20:52:11.409611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.994 [2024-04-24 20:52:11.409637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.994 [2024-04-24 20:52:11.409680] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2994250 was disconnected and freed. reset controller. 00:20:46.994 [2024-04-24 20:52:11.410880] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410898] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410904] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410911] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410918] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410925] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410935] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410941] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410947] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410954] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410960] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410974] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410980] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410987] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.410999] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411005] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411012] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411019] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411025] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411031] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411038] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411044] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411050] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411057] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411063] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411070] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411076] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411083] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411102] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411111] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411118] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411125] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411131] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411137] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411143] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the 
state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411149] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411156] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411163] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411170] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411176] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411182] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411189] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411195] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411201] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411208] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411215] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411221] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411227] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411233] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411240] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411246] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411252] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411258] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411264] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411271] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411277] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411285] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411291] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.411297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df8d0 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412251] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412280] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412288] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412294] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412301] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412308] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412315] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.994 [2024-04-24 20:52:11.412321] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412329] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412335] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412354] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412360] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412367] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412374] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412387] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 
20:52:11.412400] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412420] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412438] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412445] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412452] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412465] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412471] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412478] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412485] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412498] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412511] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412517] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412524] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412531] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412537] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same 
with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412550] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412556] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412563] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412569] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412575] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412582] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412595] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412601] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412608] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412622] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412629] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412635] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412641] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412648] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412654] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412660] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412666] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412673] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412679] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.412685] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dfd60 is same with the state(5) to be set 00:20:46.995 [2024-04-24 20:52:11.413158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413350] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.995 [2024-04-24 20:52:11.413411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.995 [2024-04-24 20:52:11.413418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413517] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413633] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413639] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413645] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413651] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be 
set 00:20:46.996 [2024-04-24 20:52:11.413654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413656] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413662] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413667] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413676] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413681] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413686] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 20:52:11.413691] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413698] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413704] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413709] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413714] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413719] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413729] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 
00:20:46.996 [2024-04-24 20:52:11.413731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413735] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413740] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413744] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413750] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with [2024-04-24 20:52:11.413749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:46.996 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413756] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413761] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with [2024-04-24 20:52:11.413760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:12the state(5) to be set 00:20:46.996 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413770] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413775] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413781] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413785] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413791] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413797] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 [2024-04-24 20:52:11.413801] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 
00:20:46.996 [2024-04-24 20:52:11.413806] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with [2024-04-24 20:52:11.413806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:20:46.996 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413814] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:12[2024-04-24 20:52:11.413819] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.996 the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413825] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.996 [2024-04-24 20:52:11.413827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.996 [2024-04-24 20:52:11.413831] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413835] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413840] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.413845] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 20:52:11.413850] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413856] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:12[2024-04-24 20:52:11.413860] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413873] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.413877] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413883] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 
20:52:11.413883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.413888] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.413893] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413898] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.413903] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.413914] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413919] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.413924] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.413933] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413938] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with [2024-04-24 20:52:11.413938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:12the state(5) to be set 00:20:46.997 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.413945] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.413950] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413956] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 
20:52:11.413957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.413961] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-04-24 20:52:11.413967] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413973] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:12[2024-04-24 20:52:11.413978] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413984] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e0680 is same with the state(5) to be set 00:20:46.997 [2024-04-24 20:52:11.413985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.413995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.997 [2024-04-24 20:52:11.414276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.997 [2024-04-24 20:52:11.414283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.414293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.414300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.414346] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2996c40 was disconnected and freed. reset controller. 00:20:46.998 [2024-04-24 20:52:11.414532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:46.998 [2024-04-24 20:52:11.414578] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2858440 (9): Bad file descriptor 00:20:46.998 [2024-04-24 20:52:11.414643] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.998 [2024-04-24 20:52:11.416088] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.998 [2024-04-24 20:52:11.416110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:46.998 [2024-04-24 20:52:11.416145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28801f0 (9): Bad file descriptor 00:20:46.998 [2024-04-24 20:52:11.416175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 
20:52:11.416249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416414] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.998 [2024-04-24 20:52:11.416676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.998 [2024-04-24 20:52:11.416683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.416693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.416700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.416709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.416715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.416729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.416737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.416746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.416753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.416784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.416816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.416856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.416892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.416931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.416965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.417005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.417046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.417091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.417130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.417168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.417203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.417244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.430985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.430993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431037] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.999 [2024-04-24 20:52:11.431197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.431207] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2915040 is same with the state(5) to be set 00:20:46.999 [2024-04-24 20:52:11.431293] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2915040 was disconnected and freed. reset controller. 00:20:46.999 [2024-04-24 20:52:11.431368] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.999 [2024-04-24 20:52:11.432981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.999 [2024-04-24 20:52:11.433212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.999 [2024-04-24 20:52:11.433225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2858440 with addr=10.0.0.2, port=4420 00:20:46.999 [2024-04-24 20:52:11.433236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2858440 is same with the state(5) to be set 00:20:46.999 [2024-04-24 20:52:11.433271] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2420310 (9): Bad file descriptor 00:20:46.999 [2024-04-24 20:52:11.433311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.999 [2024-04-24 20:52:11.433323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.433333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.999 [2024-04-24 20:52:11.433341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.999 [2024-04-24 20:52:11.433350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.999 [2024-04-24 20:52:11.433357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29f73b0 is same with the state(5) to be set 00:20:47.000 [2024-04-24 20:52:11.433407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:47.000 [2024-04-24 20:52:11.433452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28804f0 is same with the state(5) to be set 00:20:47.000 [2024-04-24 20:52:11.433494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ebcd0 (9): Bad file descriptor 00:20:47.000 [2024-04-24 20:52:11.433522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a17810 is same with the state(5) to be set 00:20:47.000 [2024-04-24 20:52:11.433611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28789d0 is same with the state(5) to be set 00:20:47.000 [2024-04-24 20:52:11.433700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2884a60 is same with the state(5) to be set 00:20:47.000 [2024-04-24 20:52:11.433798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:47.000 [2024-04-24 20:52:11.433855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.433862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x287fa50 is same with the state(5) to be set 00:20:47.000 [2024-04-24 
20:52:11.433890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2858440 (9): Bad file descriptor 00:20:47.000 [2024-04-24 20:52:11.435683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.000 [2024-04-24 20:52:11.435845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.000 [2024-04-24 20:52:11.435853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2850540 is same with the state(5) to be set 00:20:47.000 [2024-04-24 20:52:11.435897] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2850540 was disconnected and freed. reset controller. 
00:20:47.000 [2024-04-24 20:52:11.435967] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:20:47.000 [2024-04-24 20:52:11.436323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:47.000 [2024-04-24 20:52:11.436586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:47.000 [2024-04-24 20:52:11.436599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28801f0 with addr=10.0.0.2, port=4420 
00:20:47.000 [2024-04-24 20:52:11.436608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28801f0 is same with the state(5) to be set 
00:20:47.000 [2024-04-24 20:52:11.436682] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:20:47.000 [2024-04-24 20:52:11.436748] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:20:47.000 [2024-04-24 20:52:11.436789] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:20:47.000 [2024-04-24 20:52:11.437883] nvme_tcp.c:1215:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:20:47.000 [2024-04-24 20:52:11.437908] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 
00:20:47.000 [2024-04-24 20:52:11.437924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28804f0 (9): Bad file descriptor 
00:20:47.000 [2024-04-24 20:52:11.438226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:47.000 [2024-04-24 20:52:11.438554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:47.000 [2024-04-24 20:52:11.438566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2420310 with addr=10.0.0.2, port=4420 
00:20:47.000 [2024-04-24 20:52:11.438574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2420310 is same with the state(5) to be set 
00:20:47.000 [2024-04-24 20:52:11.438584] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28801f0 (9): Bad file descriptor 
00:20:47.001 [2024-04-24 20:52:11.438594] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 
00:20:47.001 [2024-04-24 20:52:11.438602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 
00:20:47.001 [2024-04-24 20:52:11.438611] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:20:47.001 [2024-04-24 20:52:11.439001] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:47.001 [2024-04-24 20:52:11.439025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2420310 (9): Bad file descriptor 
00:20:47.001 [2024-04-24 20:52:11.439039] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 
00:20:47.001 [2024-04-24 20:52:11.439047] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 
00:20:47.001 [2024-04-24 20:52:11.439055] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:20:47.001 [2024-04-24 20:52:11.439396] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:47.001 [2024-04-24 20:52:11.439575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:47.001 [2024-04-24 20:52:11.439789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:20:47.001 [2024-04-24 20:52:11.439801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28804f0 with addr=10.0.0.2, port=4420 
00:20:47.001 [2024-04-24 20:52:11.439810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28804f0 is same with the state(5) to be set 
00:20:47.001 [2024-04-24 20:52:11.439818] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 
00:20:47.001 [2024-04-24 20:52:11.439825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 
00:20:47.001 [2024-04-24 20:52:11.439832] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:47.001 [2024-04-24 20:52:11.439891] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:47.001 [2024-04-24 20:52:11.439900] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28804f0 (9): Bad file descriptor 
00:20:47.001 [2024-04-24 20:52:11.439945] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 
00:20:47.001 [2024-04-24 20:52:11.439953] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 
00:20:47.001 [2024-04-24 20:52:11.439960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:20:47.001 [2024-04-24 20:52:11.440002] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
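The connect() failures with errno = 111 (ECONNREFUSED) and the "controller reinitialization failed" / "Resetting controller failed." entries above recur for several subsystems (cnode1, cnode3, cnode5, cnode8, ...) as the qpairs are torn down. A minimal sketch for tallying those error lines per controller NQN from a saved copy of this console output; the file name nvmf_shutdown.log is only an assumed example, and the script uses nothing beyond the Python standard library:

#!/usr/bin/env python3
# Hypothetical helper, not part of the test run: count the SPDK error
# patterns visible in the console log above.
import re
import sys
from collections import Counter

counts = Counter()
ctrlr_fail = Counter()
# Matches e.g. "nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state."
fail_re = re.compile(r"nvme_ctrlr_fail: \*ERROR\*: \[([^\]]+)\] in failed state")

path = sys.argv[1] if len(sys.argv) > 1 else "nvmf_shutdown.log"  # assumed file name
with open(path, errors="replace") as log:
    for line in log:
        if "connect() failed, errno = 111" in line:
            counts["connect() refused (errno 111)"] += 1
        if "ABORTED - SQ DELETION" in line:
            counts["completions aborted by SQ deletion"] += 1
        if "Resetting controller failed." in line:
            counts["controller resets failed"] += 1
        m = fail_re.search(line)
        if m:
            ctrlr_fail[m.group(1)] += 1

for what, n in counts.items():
    print(f"{n:6d}  {what}")
for nqn, n in ctrlr_fail.items():
    print(f"{n:6d}  {nqn} entered failed state")

Invoked as "python3 tally_errors.py nvmf_shutdown.log" (script name also illustrative), it prints one count per error pattern and one per failed NQN, which makes the repetitive aborted-completion dumps easier to compare across runs.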
00:20:47.001 [2024-04-24 20:52:11.442529] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29f73b0 (9): Bad file descriptor 00:20:47.001 [2024-04-24 20:52:11.442556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a17810 (9): Bad file descriptor 00:20:47.001 [2024-04-24 20:52:11.442576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28789d0 (9): Bad file descriptor 00:20:47.001 [2024-04-24 20:52:11.442593] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2884a60 (9): Bad file descriptor 00:20:47.001 [2024-04-24 20:52:11.442610] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x287fa50 (9): Bad file descriptor 00:20:47.001 [2024-04-24 20:52:11.442751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.442988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.442996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.001 [2024-04-24 20:52:11.443279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.001 [2024-04-24 20:52:11.443290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:47.002 [2024-04-24 20:52:11.443483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 
[2024-04-24 20:52:11.443681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 
20:52:11.443880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.002 [2024-04-24 20:52:11.443945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.002 [2024-04-24 20:52:11.443955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.443964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.443975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.443983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.443993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x290d760 is same with the state(5) to be set 00:20:47.003 [2024-04-24 20:52:11.445680] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:47.003 [2024-04-24 20:52:11.445706] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:47.003 [2024-04-24 20:52:11.446163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.446509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.446523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2858440 with addr=10.0.0.2, port=4420 00:20:47.003 [2024-04-24 20:52:11.446531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2858440 is same with the state(5) to be set 00:20:47.003 [2024-04-24 20:52:11.446858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.447234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.447245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x29ebcd0 with addr=10.0.0.2, port=4420 00:20:47.003 [2024-04-24 20:52:11.447252] 
nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29ebcd0 is same with the state(5) to be set 00:20:47.003 [2024-04-24 20:52:11.447541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:47.003 [2024-04-24 20:52:11.447559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2858440 (9): Bad file descriptor 00:20:47.003 [2024-04-24 20:52:11.447569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ebcd0 (9): Bad file descriptor 00:20:47.003 [2024-04-24 20:52:11.447933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.448252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.448262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28801f0 with addr=10.0.0.2, port=4420 00:20:47.003 [2024-04-24 20:52:11.448269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28801f0 is same with the state(5) to be set 00:20:47.003 [2024-04-24 20:52:11.448276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:47.003 [2024-04-24 20:52:11.448283] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:47.003 [2024-04-24 20:52:11.448290] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:47.003 [2024-04-24 20:52:11.448302] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:47.003 [2024-04-24 20:52:11.448308] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:47.003 [2024-04-24 20:52:11.448314] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:47.003 [2024-04-24 20:52:11.448355] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:47.003 [2024-04-24 20:52:11.448365] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.003 [2024-04-24 20:52:11.448371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:47.003 [2024-04-24 20:52:11.448385] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28801f0 (9): Bad file descriptor 00:20:47.003 [2024-04-24 20:52:11.448758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.449032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.449042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2420310 with addr=10.0.0.2, port=4420 00:20:47.003 [2024-04-24 20:52:11.449049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2420310 is same with the state(5) to be set 00:20:47.003 [2024-04-24 20:52:11.449057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:47.003 [2024-04-24 20:52:11.449065] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:47.003 [2024-04-24 20:52:11.449072] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:47.003 [2024-04-24 20:52:11.449106] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.003 [2024-04-24 20:52:11.449118] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2420310 (9): Bad file descriptor 00:20:47.003 [2024-04-24 20:52:11.449159] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:47.003 [2024-04-24 20:52:11.449167] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:47.003 [2024-04-24 20:52:11.449174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:47.003 [2024-04-24 20:52:11.449208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:20:47.003 [2024-04-24 20:52:11.449217] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:47.003 [2024-04-24 20:52:11.449590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.449914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:47.003 [2024-04-24 20:52:11.449925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x28804f0 with addr=10.0.0.2, port=4420 00:20:47.003 [2024-04-24 20:52:11.449932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28804f0 is same with the state(5) to be set 00:20:47.003 [2024-04-24 20:52:11.449964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x28804f0 (9): Bad file descriptor 00:20:47.003 [2024-04-24 20:52:11.449996] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:47.003 [2024-04-24 20:52:11.450002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:47.003 [2024-04-24 20:52:11.450009] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:47.003 [2024-04-24 20:52:11.450043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:47.003 [2024-04-24 20:52:11.452650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 
20:52:11.452828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.003 [2024-04-24 20:52:11.452924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.003 [2024-04-24 20:52:11.452931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.452940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.452947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.452956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.452963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.452974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.452981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.452990] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.452997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453319] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.004 [2024-04-24 20:52:11.453590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.004 [2024-04-24 20:52:11.453598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.453616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.453633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.453649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.453666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.453683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.453699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.453715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.453724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29162b0 is same with the state(5) to be set 00:20:47.005 [2024-04-24 20:52:11.454990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455455] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.005 [2024-04-24 20:52:11.455561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.005 [2024-04-24 20:52:11.455570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.455990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.455999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.456007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.456016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.456023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.456033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.456040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.456050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.456057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.456066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.456074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.456083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.456091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.456101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.456109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.456117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2995790 is same with the state(5) to be set 00:20:47.006 [2024-04-24 20:52:11.457385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.457398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.457409] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.457417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.457427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.006 [2024-04-24 20:52:11.457437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.006 [2024-04-24 20:52:11.457447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.457989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.457999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.458006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.458015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.458023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.458033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.458041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.458050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.458057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.458067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.458074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.458083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.007 [2024-04-24 20:52:11.458092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.007 [2024-04-24 20:52:11.458102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:47.008 [2024-04-24 20:52:11.458110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 
20:52:11.458278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458449] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.458484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.458494] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284dbe0 is same with the state(5) to be set 00:20:47.008 [2024-04-24 20:52:11.459764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459905] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.459989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.459996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.460005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.460014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.460023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.460031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.460040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.460048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.460057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.008 [2024-04-24 20:52:11.460066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.008 [2024-04-24 20:52:11.460076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.009 [2024-04-24 20:52:11.460719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.009 [2024-04-24 20:52:11.460732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.460857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.460865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x284f090 is same with the state(5) to be set 00:20:47.010 [2024-04-24 20:52:11.462125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462355] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:47.010 [2024-04-24 20:52:11.462515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:47.010 [2024-04-24 20:52:11.462523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.010 [2024-04-24 20:52:11.462532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.010 [2024-04-24 20:52:11.462539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.010-00:20:47.011 [2024-04-24 20:52:11.462550-463200] nvme_qpair.c: 243/474: *NOTICE*: the same READ / ABORTED - SQ DELETION (00/08) pair repeats for cid:24 through cid:62 (sqid:1, nsid:1, lba:19456 through lba:24320 in 128-block steps, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all with the identical completion qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.011 [2024-04-24 20:52:11.463210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:47.011 [2024-04-24 20:52:11.463217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:47.011 [2024-04-24 20:52:11.463226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2851920 is same with the state(5) to be set
00:20:47.011 [2024-04-24 20:52:11.465023] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:47.011 [2024-04-24 20:52:11.465044] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:47.011 [2024-04-24 20:52:11.465053] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:47.011 [2024-04-24 20:52:11.465064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:47.012 [2024-04-24 20:52:11.465143] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:47.012 task offset: 29952 on job bdev=Nvme3n1 fails
00:20:47.012
00:20:47.012 Latency(us)
00:20:47.012 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400)
00:20:47.012 Device Information : runtime(s)   IOPS   MiB/s  Fail/s  TO/s    Average       min        max
00:20:47.012 Nvme1n1  (ended in about 1.03 seconds with error) :  1.03  129.12   8.07  62.13  0.00  329847.07  22282.24  298844.16
00:20:47.012 Nvme2n1  (ended in about 1.05 seconds with error) :  1.05  125.74   7.86  60.96  0.00  328745.80  23592.96  300591.79
00:20:47.012 Nvme3n1  (ended in about 1.01 seconds with error) :  1.01  194.44  12.15  63.49  0.00  230537.16   3713.71  242920.11
00:20:47.012 Nvme4n1  (ended in about 1.05 seconds with error) :  1.05  189.13  11.82  60.83  0.00  231866.41  17694.72  242920.11
00:20:47.012 Nvme5n1  (ended in about 1.01 seconds with error) :  1.01  189.93  11.87  63.31  0.00  220949.39   5024.43  246415.36
00:20:47.012 Nvme6n1  (ended in about 1.05 seconds with error) :  1.05  188.70  11.79  60.69  0.00  218835.00  31020.37  248162.99
00:20:47.012 Nvme7n1  (ended in about 1.06 seconds with error) :  1.06  181.66  11.35  60.55  0.00  218429.23  15182.51  251658.24
00:20:47.012 Nvme8n1  (ended in about 1.03 seconds with error) :  1.03  185.93  11.62   7.75  0.00  261661.76  33423.36  228939.09
00:20:47.012 Nvme9n1  (ended in about 1.06 seconds with error) :  1.06  120.84   7.55  60.42  0.00  273361.64  17476.27  256901.12
00:20:47.012 Nvme10n1 (ended in about 1.04 seconds with error) :  1.04  123.06   7.69  61.53  0.00  257712.36  40195.41  251658.24
00:20:47.012 ===================================================================================================================
00:20:47.012 Total : 1628.54  101.78  561.66  0.00  252601.17  3713.71  300591.79
00:20:47.012 [2024-04-24 20:52:11.492188] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:47.012 [2024-04-24 20:52:11.492230] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:47.012 [2024-04-24 20:52:11.492694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.012 [2024-04-24 20:52:11.492945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.012 [2024-04-24 20:52:11.492958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2884a60 with addr=10.0.0.2, port=4420
00:20:47.012 [2024-04-24 20:52:11.492969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2884a60 is same with the state(5) to be set
00:20:47.012 [2024-04-24 20:52:11.493300-494910] (the same connect() failed, errno = 111 / sock connection error / recv state sequence repeats for tqpair=0x287fa50, tqpair=0x28789d0 and tqpair=0x29f73b0, all with addr=10.0.0.2, port=4420)
00:20:47.012 [2024-04-24 20:52:11.496258] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:47.012 [2024-04-24 20:52:11.496286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:47.012 [2024-04-24 20:52:11.496296] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:47.012 [2024-04-24 20:52:11.496305] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:47.012 [2024-04-24 20:52:11.496316] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:47.012 [2024-04-24 20:52:11.496717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.012 [2024-04-24 20:52:11.496917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:47.012 [2024-04-24 20:52:11.496928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2a17810 with addr=10.0.0.2, port=4420
00:20:47.012 [2024-04-24 20:52:11.496939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a17810 is same with the state(5) to be set
00:20:47.012 [2024-04-24 20:52:11.496952-496981] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2884a60, 0x287fa50, 0x28789d0 and 0x29f73b0 (9): Bad file descriptor
00:20:47.012 [2024-04-24 20:52:11.497015-497050] bdev_nvme.c:2877:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (four occurrences)
00:20:47.012 [2024-04-24 20:52:11.497891-500699] (the same connect() failed, errno = 111 / sock connection error / recv state sequence repeats for tqpair=0x29ebcd0, 0x2858440, 0x28801f0, 0x2420310 and 0x28804f0, all with addr=10.0.0.2, port=4420)
00:20:47.012 [2024-04-24 20:52:11.500711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2a17810 (9): Bad file descriptor
00:20:47.012 [2024-04-24 20:52:11.500724-500830] nvme_ctrlr.c:4040/1749/1041: *ERROR*: [nqn.2016-06.io.spdk:cnode2], [cnode4], [cnode6] and [cnode7]: Ctrlr is in error state, controller reinitialization failed, in failed state.
00:20:47.013 [2024-04-24 20:52:11.500896-500917] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (four occurrences)
00:20:47.013 [2024-04-24 20:52:11.500925-500962] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x29ebcd0, 0x2858440, 0x28801f0, 0x2420310 and 0x28804f0 (9): Bad file descriptor
00:20:47.013 [2024-04-24 20:52:11.500970-500982] nvme_ctrlr.c:4040/1749/1041: *ERROR*: [nqn.2016-06.io.spdk:cnode9]: Ctrlr is in error state, controller reinitialization failed, in failed state.
00:20:47.013 [2024-04-24 20:52:11.501025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:47.013 [2024-04-24 20:52:11.501034-501141] nvme_ctrlr.c:4040/1749/1041: *ERROR*: [nqn.2016-06.io.spdk:cnode10], [cnode3], [cnode5], [cnode1] and [cnode8]: Ctrlr is in error state, controller reinitialization failed, in failed state.
00:20:47.013 [2024-04-24 20:52:11.501170-501196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (five occurrences)
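Every connect() failure above reports errno = 111, which on Linux is ECONNREFUSED: by this point in the shutdown test the target side is gone, so nothing is listening on 10.0.0.2:4420 any more and each reconnect attempt from the reset path is refused. A quick way to observe the same symptom from a shell, as a hedged sketch that is not part of the test scripts (it assumes the same 10.0.0.2:4420 endpoint with no listener behind it):

    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "connect() to 10.0.0.2:4420 refused or timed out (ECONNREFUSED is errno 111 on Linux)"
    fi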
00:20:47.274 20:52:11 -- target/shutdown.sh@136 -- # nvmfpid=
00:20:47.274 20:52:11 -- target/shutdown.sh@139 -- # sleep 1
00:20:48.216 20:52:12 -- target/shutdown.sh@142 -- # kill -9 2833636
00:20:48.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2833636) - No such process
00:20:48.216 20:52:12 -- target/shutdown.sh@142 -- # true
00:20:48.216 20:52:12 -- target/shutdown.sh@144 -- # stoptarget
00:20:48.216 20:52:12 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:48.216 20:52:12 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:48.216 20:52:12 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:48.216 20:52:12 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:48.216 20:52:12 -- nvmf/common.sh@477 -- # nvmfcleanup
00:20:48.216 20:52:12 -- nvmf/common.sh@117 -- # sync
00:20:48.216 20:52:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:48.216 20:52:12 -- nvmf/common.sh@120 -- # set +e
00:20:48.216 20:52:12 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:48.216 20:52:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:48.216 rmmod nvme_tcp
00:20:48.216 rmmod nvme_fabrics
00:20:48.216 rmmod nvme_keyring
00:20:48.216 20:52:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:48.216 20:52:12 -- nvmf/common.sh@124 -- # set -e
00:20:48.216 20:52:12 -- nvmf/common.sh@125 -- # return 0
00:20:48.216 20:52:12 -- nvmf/common.sh@478 -- # '[' -n '' ']'
00:20:48.216 20:52:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:20:48.216 20:52:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:20:48.216 20:52:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:20:48.216 20:52:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:48.216 20:52:12 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:48.216 20:52:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:48.216 20:52:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:48.216 20:52:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:50.764 20:52:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:50.764
00:20:50.764 real 0m8.141s
00:20:50.764 user 0m20.764s
00:20:50.764 sys 0m1.258s
00:20:50.764 20:52:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:50.764 20:52:14 -- common/autotest_common.sh@10 -- # set +x
00:20:50.764 ************************************
00:20:50.764 END TEST nvmf_shutdown_tc3
00:20:50.764 ************************************
00:20:50.764 20:52:14 -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:20:50.764
00:20:50.764 real 0m32.376s
00:20:50.764 user 1m15.256s
00:20:50.764 sys 0m9.407s
00:20:50.764 20:52:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:50.764 20:52:14 -- common/autotest_common.sh@10 -- # set +x
00:20:50.764 ************************************
00:20:50.764 END TEST nvmf_shutdown
00:20:50.764 ************************************
00:20:50.764 20:52:14 -- nvmf/nvmf.sh@84 -- # timing_exit target
00:20:50.764 20:52:14 -- common/autotest_common.sh@716 -- # xtrace_disable
00:20:50.764 20:52:14 -- common/autotest_common.sh@10 -- # set +x
00:20:50.764 20:52:14 -- nvmf/nvmf.sh@86 -- # timing_enter host
00:20:50.764 20:52:14 -- common/autotest_common.sh@710 -- # xtrace_disable
00:20:50.764 20:52:14 -- common/autotest_common.sh@10 -- # set +x
00:20:50.764
20:52:14 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:20:50.764 20:52:14 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:50.764 20:52:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:50.764 20:52:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:50.764 20:52:14 -- common/autotest_common.sh@10 -- # set +x 00:20:50.764 ************************************ 00:20:50.764 START TEST nvmf_multicontroller 00:20:50.764 ************************************ 00:20:50.764 20:52:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:50.764 * Looking for test storage... 00:20:50.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:50.764 20:52:15 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.764 20:52:15 -- nvmf/common.sh@7 -- # uname -s 00:20:50.764 20:52:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.764 20:52:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.764 20:52:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.764 20:52:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.764 20:52:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:50.764 20:52:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:50.764 20:52:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.764 20:52:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:50.764 20:52:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.764 20:52:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:50.764 20:52:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:50.764 20:52:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:20:50.764 20:52:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.764 20:52:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:50.764 20:52:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:50.764 20:52:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.764 20:52:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.764 20:52:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.764 20:52:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.764 20:52:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.764 20:52:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.764 20:52:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.764 20:52:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.764 20:52:15 -- paths/export.sh@5 -- # export PATH 00:20:50.764 20:52:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.764 20:52:15 -- nvmf/common.sh@47 -- # : 0 00:20:50.764 20:52:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:50.764 20:52:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:50.764 20:52:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:50.764 20:52:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.764 20:52:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.764 20:52:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:50.764 20:52:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:50.764 20:52:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:50.764 20:52:15 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:50.764 20:52:15 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:50.764 20:52:15 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:50.764 20:52:15 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:50.764 20:52:15 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.764 20:52:15 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:50.764 20:52:15 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:50.764 20:52:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:50.764 20:52:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.764 20:52:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:50.764 20:52:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:50.765 20:52:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:50.765 20:52:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.765 20:52:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.765 20:52:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:20:50.765 20:52:15 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:20:50.765 20:52:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:50.765 20:52:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:50.765 20:52:15 -- common/autotest_common.sh@10 -- # set +x 00:20:57.357 20:52:21 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:57.357 20:52:21 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:57.357 20:52:21 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:57.357 20:52:21 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:57.357 20:52:21 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:57.357 20:52:21 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:57.357 20:52:21 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:57.357 20:52:21 -- nvmf/common.sh@295 -- # net_devs=() 00:20:57.357 20:52:21 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:57.357 20:52:21 -- nvmf/common.sh@296 -- # e810=() 00:20:57.357 20:52:21 -- nvmf/common.sh@296 -- # local -ga e810 00:20:57.357 20:52:21 -- nvmf/common.sh@297 -- # x722=() 00:20:57.357 20:52:21 -- nvmf/common.sh@297 -- # local -ga x722 00:20:57.357 20:52:21 -- nvmf/common.sh@298 -- # mlx=() 00:20:57.358 20:52:21 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:57.358 20:52:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:57.358 20:52:21 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:57.358 20:52:21 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:57.358 20:52:21 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:57.358 20:52:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.358 20:52:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:57.358 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:57.358 20:52:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:57.358 20:52:21 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:57.358 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:57.358 20:52:21 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:20:57.358 20:52:21 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:57.358 20:52:21 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.358 20:52:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.358 20:52:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:57.358 20:52:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.358 20:52:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:57.358 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:57.358 20:52:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.358 20:52:21 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:57.358 20:52:21 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:57.358 20:52:21 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:57.358 20:52:21 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:57.358 20:52:21 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:57.358 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:57.358 20:52:21 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:57.358 20:52:21 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:57.358 20:52:21 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:57.358 20:52:21 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:57.358 20:52:21 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:57.358 20:52:21 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:57.358 20:52:21 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:57.358 20:52:21 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:57.358 20:52:21 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:57.358 20:52:21 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:57.358 20:52:21 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:57.358 20:52:21 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:57.358 20:52:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:57.358 20:52:21 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:57.358 20:52:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:57.358 20:52:21 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:57.358 20:52:21 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:57.619 20:52:21 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:57.619 20:52:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:57.619 20:52:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:57.619 20:52:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:57.619 20:52:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:57.619 20:52:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:57.619 20:52:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:20:57.880 20:52:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:57.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:57.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:20:57.880 00:20:57.880 --- 10.0.0.2 ping statistics --- 00:20:57.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.880 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:20:57.880 20:52:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:57.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:57.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:20:57.880 00:20:57.880 --- 10.0.0.1 ping statistics --- 00:20:57.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:57.880 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:57.880 20:52:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:57.880 20:52:22 -- nvmf/common.sh@411 -- # return 0 00:20:57.880 20:52:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:57.880 20:52:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:57.880 20:52:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:57.880 20:52:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:57.880 20:52:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:57.880 20:52:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:57.880 20:52:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:57.880 20:52:22 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:57.880 20:52:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:57.880 20:52:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:57.880 20:52:22 -- common/autotest_common.sh@10 -- # set +x 00:20:57.880 20:52:22 -- nvmf/common.sh@470 -- # nvmfpid=2838598 00:20:57.880 20:52:22 -- nvmf/common.sh@471 -- # waitforlisten 2838598 00:20:57.880 20:52:22 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:57.880 20:52:22 -- common/autotest_common.sh@817 -- # '[' -z 2838598 ']' 00:20:57.880 20:52:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.880 20:52:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:57.880 20:52:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.880 20:52:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:57.880 20:52:22 -- common/autotest_common.sh@10 -- # set +x 00:20:57.880 [2024-04-24 20:52:22.389764] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:20:57.880 [2024-04-24 20:52:22.389827] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.880 EAL: No free 2048 kB hugepages reported on node 1 00:20:57.880 [2024-04-24 20:52:22.459285] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:58.139 [2024-04-24 20:52:22.531171] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:58.139 [2024-04-24 20:52:22.531211] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:58.139 [2024-04-24 20:52:22.531220] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:58.139 [2024-04-24 20:52:22.531227] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:58.139 [2024-04-24 20:52:22.531234] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:58.140 [2024-04-24 20:52:22.531367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.140 [2024-04-24 20:52:22.531525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:58.140 [2024-04-24 20:52:22.531525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:58.710 20:52:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:58.710 20:52:23 -- common/autotest_common.sh@850 -- # return 0 00:20:58.710 20:52:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:58.710 20:52:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:58.710 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.710 20:52:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.710 20:52:23 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:58.710 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.711 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.711 [2024-04-24 20:52:23.295409] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.711 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.711 20:52:23 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:58.711 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.711 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.711 Malloc0 00:20:58.711 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.711 20:52:23 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.711 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.711 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.711 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.711 20:52:23 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.711 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.711 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:58.971 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.971 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 [2024-04-24 20:52:23.361159] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:58.971 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.971 20:52:23 
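The nvmftestinit trace a little further up (nvmf/common.sh@229-268) is what builds the point-to-point TCP test network for this run: one port of the NIC pair stays in the root namespace as the initiator side (cvl_0_1, 10.0.0.1) and the other is moved into the cvl_0_0_ns_spdk namespace where nvmf_tgt later listens (cvl_0_0, 10.0.0.2, port 4420). The same steps, collected into a standalone sketch for reference; the interface names and addresses are the ones this rig reports, so they would need to be substituted elsewhere, and the commands are run as root just as the harness does:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # firewall rule the harness inserts before testing
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator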
-- common/autotest_common.sh@10 -- # set +x 00:20:58.971 [2024-04-24 20:52:23.373085] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:58.971 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.971 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 Malloc1 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:58.971 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.971 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:58.971 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.971 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:58.971 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.971 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:58.971 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:58.971 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:58.971 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:58.971 20:52:23 -- host/multicontroller.sh@44 -- # bdevperf_pid=2838946 00:20:58.971 20:52:23 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.971 20:52:23 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:58.971 20:52:23 -- host/multicontroller.sh@47 -- # waitforlisten 2838946 /var/tmp/bdevperf.sock 00:20:58.971 20:52:23 -- common/autotest_common.sh@817 -- # '[' -z 2838946 ']' 00:20:58.971 20:52:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.971 20:52:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:58.971 20:52:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:20:58.971 20:52:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:58.971 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.233 20:52:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:59.233 20:52:23 -- common/autotest_common.sh@850 -- # return 0 00:20:59.233 20:52:23 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:59.233 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.233 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.233 NVMe0n1 00:20:59.233 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.233 20:52:23 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.233 20:52:23 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:59.233 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.233 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.233 20:52:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.233 1 00:20:59.233 20:52:23 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:59.233 20:52:23 -- common/autotest_common.sh@638 -- # local es=0 00:20:59.233 20:52:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:59.233 20:52:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:59.233 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.233 20:52:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:59.233 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.233 20:52:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:59.233 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.233 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.233 request: 00:20:59.233 { 00:20:59.233 "name": "NVMe0", 00:20:59.233 "trtype": "tcp", 00:20:59.233 "traddr": "10.0.0.2", 00:20:59.233 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:59.233 "hostaddr": "10.0.0.2", 00:20:59.233 "hostsvcid": "60000", 00:20:59.233 "adrfam": "ipv4", 00:20:59.233 "trsvcid": "4420", 00:20:59.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.234 "method": "bdev_nvme_attach_controller", 00:20:59.234 "req_id": 1 00:20:59.234 } 00:20:59.234 Got JSON-RPC error response 00:20:59.234 response: 00:20:59.234 { 00:20:59.234 "code": -114, 00:20:59.234 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:59.234 } 00:20:59.234 20:52:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:59.234 20:52:23 -- common/autotest_common.sh@641 -- # es=1 00:20:59.234 20:52:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:59.234 20:52:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:59.234 20:52:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:59.234 20:52:23 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:59.234 20:52:23 -- common/autotest_common.sh@638 -- # local es=0 00:20:59.234 20:52:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:59.234 20:52:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.234 20:52:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:59.234 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.234 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.234 request: 00:20:59.234 { 00:20:59.234 "name": "NVMe0", 00:20:59.234 "trtype": "tcp", 00:20:59.234 "traddr": "10.0.0.2", 00:20:59.234 "hostaddr": "10.0.0.2", 00:20:59.234 "hostsvcid": "60000", 00:20:59.234 "adrfam": "ipv4", 00:20:59.234 "trsvcid": "4420", 00:20:59.234 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:59.234 "method": "bdev_nvme_attach_controller", 00:20:59.234 "req_id": 1 00:20:59.234 } 00:20:59.234 Got JSON-RPC error response 00:20:59.234 response: 00:20:59.234 { 00:20:59.234 "code": -114, 00:20:59.234 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:59.234 } 00:20:59.234 20:52:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:59.234 20:52:23 -- common/autotest_common.sh@641 -- # es=1 00:20:59.234 20:52:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:59.234 20:52:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:59.234 20:52:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:59.234 20:52:23 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:59.234 20:52:23 -- common/autotest_common.sh@638 -- # local es=0 00:20:59.234 20:52:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:59.234 20:52:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.234 20:52:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:59.234 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.234 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.234 request: 00:20:59.234 { 00:20:59.234 "name": "NVMe0", 00:20:59.234 "trtype": "tcp", 00:20:59.234 "traddr": "10.0.0.2", 00:20:59.234 "hostaddr": 
"10.0.0.2", 00:20:59.234 "hostsvcid": "60000", 00:20:59.234 "adrfam": "ipv4", 00:20:59.234 "trsvcid": "4420", 00:20:59.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.234 "multipath": "disable", 00:20:59.234 "method": "bdev_nvme_attach_controller", 00:20:59.234 "req_id": 1 00:20:59.234 } 00:20:59.234 Got JSON-RPC error response 00:20:59.234 response: 00:20:59.234 { 00:20:59.234 "code": -114, 00:20:59.234 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:59.234 } 00:20:59.234 20:52:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:59.234 20:52:23 -- common/autotest_common.sh@641 -- # es=1 00:20:59.234 20:52:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:59.234 20:52:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:59.234 20:52:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:59.234 20:52:23 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:59.234 20:52:23 -- common/autotest_common.sh@638 -- # local es=0 00:20:59.234 20:52:23 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:59.234 20:52:23 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:59.234 20:52:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:59.234 20:52:23 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:59.234 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.234 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.495 request: 00:20:59.495 { 00:20:59.495 "name": "NVMe0", 00:20:59.495 "trtype": "tcp", 00:20:59.495 "traddr": "10.0.0.2", 00:20:59.495 "hostaddr": "10.0.0.2", 00:20:59.495 "hostsvcid": "60000", 00:20:59.495 "adrfam": "ipv4", 00:20:59.495 "trsvcid": "4420", 00:20:59.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.496 "multipath": "failover", 00:20:59.496 "method": "bdev_nvme_attach_controller", 00:20:59.496 "req_id": 1 00:20:59.496 } 00:20:59.496 Got JSON-RPC error response 00:20:59.496 response: 00:20:59.496 { 00:20:59.496 "code": -114, 00:20:59.496 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:59.496 } 00:20:59.496 20:52:23 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:59.496 20:52:23 -- common/autotest_common.sh@641 -- # es=1 00:20:59.496 20:52:23 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:59.496 20:52:23 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:59.496 20:52:23 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:59.496 20:52:23 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.496 20:52:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.496 20:52:23 -- common/autotest_common.sh@10 -- # set +x 00:20:59.496 00:20:59.496 20:52:24 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:20:59.496 20:52:24 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:59.496 20:52:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.496 20:52:24 -- common/autotest_common.sh@10 -- # set +x 00:20:59.496 20:52:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.496 20:52:24 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:59.496 20:52:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.496 20:52:24 -- common/autotest_common.sh@10 -- # set +x 00:20:59.758 00:20:59.758 20:52:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.758 20:52:24 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:59.758 20:52:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.758 20:52:24 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:59.758 20:52:24 -- common/autotest_common.sh@10 -- # set +x 00:20:59.758 20:52:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.758 20:52:24 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:59.758 20:52:24 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:01.143 0 00:21:01.143 20:52:25 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:01.143 20:52:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.143 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:21:01.143 20:52:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.143 20:52:25 -- host/multicontroller.sh@100 -- # killprocess 2838946 00:21:01.143 20:52:25 -- common/autotest_common.sh@936 -- # '[' -z 2838946 ']' 00:21:01.143 20:52:25 -- common/autotest_common.sh@940 -- # kill -0 2838946 00:21:01.143 20:52:25 -- common/autotest_common.sh@941 -- # uname 00:21:01.144 20:52:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.144 20:52:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2838946 00:21:01.144 20:52:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:01.144 20:52:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:01.144 20:52:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2838946' 00:21:01.144 killing process with pid 2838946 00:21:01.144 20:52:25 -- common/autotest_common.sh@955 -- # kill 2838946 00:21:01.144 20:52:25 -- common/autotest_common.sh@960 -- # wait 2838946 00:21:01.144 20:52:25 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:01.144 20:52:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.144 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:21:01.144 20:52:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.144 20:52:25 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:01.144 20:52:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:01.144 20:52:25 -- common/autotest_common.sh@10 -- # set +x 00:21:01.144 20:52:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:01.144 20:52:25 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
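For reference, the multipath sequence exercised above can be reproduced by hand against the bdevperf RPC socket. A minimal sketch, assuming the commands are run from the SPDK source tree with scripts/rpc.py standing in for the test's rpc_cmd wrapper (socket path, addresses, ports and NQNs taken from this run):

  # Attach the 4420 path to cnode1 as controller NVMe0
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # Re-attaching NVMe0 with a different hostnqn, a different subsystem, or with multipath
  # disabled is rejected with error -114, as shown in the JSON-RPC responses above.
  # Add the 4421 listener as a second path to NVMe0, then detach that path again
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # Attach the 4421 path as an independent controller, NVMe1
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers    # two controllers expected
  # Drive I/O through the attached bdevs
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests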
00:21:01.144 20:52:25 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:01.144 20:52:25 -- common/autotest_common.sh@1598 -- # read -r file 00:21:01.144 20:52:25 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:01.144 20:52:25 -- common/autotest_common.sh@1597 -- # sort -u 00:21:01.144 20:52:25 -- common/autotest_common.sh@1599 -- # cat 00:21:01.144 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:01.144 [2024-04-24 20:52:23.489418] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:21:01.144 [2024-04-24 20:52:23.489473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838946 ] 00:21:01.144 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.144 [2024-04-24 20:52:23.564352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.144 [2024-04-24 20:52:23.627087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.144 [2024-04-24 20:52:24.361403] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name d5a0f0a5-24a1-48c1-854f-ebfde1f15a5b already exists 00:21:01.144 [2024-04-24 20:52:24.361436] bdev.c:7651:bdev_register: *ERROR*: Unable to add uuid:d5a0f0a5-24a1-48c1-854f-ebfde1f15a5b alias for bdev NVMe1n1 00:21:01.144 [2024-04-24 20:52:24.361447] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:01.144 Running I/O for 1 seconds... 00:21:01.144 00:21:01.144 Latency(us) 00:21:01.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.144 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:01.144 NVMe0n1 : 1.00 19883.83 77.67 0.00 0.00 6419.86 3467.95 10977.28 00:21:01.144 =================================================================================================================== 00:21:01.144 Total : 19883.83 77.67 0.00 0.00 6419.86 3467.95 10977.28 00:21:01.144 Received shutdown signal, test time was about 1.000000 seconds 00:21:01.144 00:21:01.144 Latency(us) 00:21:01.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.144 =================================================================================================================== 00:21:01.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:01.144 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:01.144 20:52:25 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:01.144 20:52:25 -- common/autotest_common.sh@1598 -- # read -r file 00:21:01.144 20:52:25 -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:01.144 20:52:25 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:01.144 20:52:25 -- nvmf/common.sh@117 -- # sync 00:21:01.144 20:52:25 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:01.144 20:52:25 -- nvmf/common.sh@120 -- # set +e 00:21:01.144 20:52:25 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:01.144 20:52:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:01.144 rmmod nvme_tcp 00:21:01.144 rmmod nvme_fabrics 00:21:01.405 rmmod nvme_keyring 00:21:01.405 20:52:25 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.405 20:52:25 -- nvmf/common.sh@124 -- # set -e 
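As a quick sanity check on the bdevperf summary captured in try.txt above: with the 4096-byte write size used by the job, 19883.83 IOPS corresponds to 19883.83 * 4096 / 1048576 ≈ 77.67 MiB/s, which matches the MiB/s column in the table, e.g.:

  echo 'scale=2; 19883.83 * 4096 / 1048576' | bc    # prints 77.67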
00:21:01.405 20:52:25 -- nvmf/common.sh@125 -- # return 0 00:21:01.405 20:52:25 -- nvmf/common.sh@478 -- # '[' -n 2838598 ']' 00:21:01.405 20:52:25 -- nvmf/common.sh@479 -- # killprocess 2838598 00:21:01.405 20:52:25 -- common/autotest_common.sh@936 -- # '[' -z 2838598 ']' 00:21:01.405 20:52:25 -- common/autotest_common.sh@940 -- # kill -0 2838598 00:21:01.405 20:52:25 -- common/autotest_common.sh@941 -- # uname 00:21:01.405 20:52:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:01.405 20:52:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2838598 00:21:01.405 20:52:25 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:01.405 20:52:25 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:01.405 20:52:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2838598' 00:21:01.405 killing process with pid 2838598 00:21:01.405 20:52:25 -- common/autotest_common.sh@955 -- # kill 2838598 00:21:01.405 20:52:25 -- common/autotest_common.sh@960 -- # wait 2838598 00:21:01.405 20:52:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:01.405 20:52:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:01.405 20:52:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:01.405 20:52:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.405 20:52:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.405 20:52:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.405 20:52:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:01.405 20:52:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.952 20:52:28 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.952 00:21:03.952 real 0m12.954s 00:21:03.952 user 0m14.939s 00:21:03.952 sys 0m6.027s 00:21:03.952 20:52:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:03.952 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:21:03.952 ************************************ 00:21:03.952 END TEST nvmf_multicontroller 00:21:03.952 ************************************ 00:21:03.952 20:52:28 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:03.952 20:52:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:03.952 20:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.952 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:21:03.952 ************************************ 00:21:03.952 START TEST nvmf_aer 00:21:03.952 ************************************ 00:21:03.952 20:52:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:03.952 * Looking for test storage... 
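The killprocess helper traced just above (tearing down the nvmf target, pid 2838598, after doing the same for the bdevperf process) follows a simple pattern. A rough reconstruction from the trace, not the verbatim helper in autotest_common.sh, which also special-cases processes launched under sudo:

  killprocess() {
      local pid=$1
      kill -0 "$pid"                               # fail early if the process is already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")      # reactor_0 / reactor_1 for the SPDK apps above
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap the child so ports and sockets are freed
  }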
00:21:03.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:03.952 20:52:28 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.952 20:52:28 -- nvmf/common.sh@7 -- # uname -s 00:21:03.952 20:52:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.952 20:52:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.952 20:52:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.952 20:52:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.952 20:52:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.952 20:52:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.952 20:52:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.952 20:52:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.952 20:52:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.952 20:52:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.952 20:52:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:03.952 20:52:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:03.952 20:52:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.952 20:52:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.952 20:52:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.952 20:52:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.952 20:52:28 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.952 20:52:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.952 20:52:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.952 20:52:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.952 20:52:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.952 20:52:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.952 20:52:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.952 20:52:28 -- paths/export.sh@5 -- # export PATH 00:21:03.952 20:52:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.952 20:52:28 -- nvmf/common.sh@47 -- # : 0 00:21:03.952 20:52:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.952 20:52:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.952 20:52:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.952 20:52:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.952 20:52:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.952 20:52:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.952 20:52:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.952 20:52:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.952 20:52:28 -- host/aer.sh@11 -- # nvmftestinit 00:21:03.952 20:52:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:03.952 20:52:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.952 20:52:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:03.952 20:52:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:03.952 20:52:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:03.952 20:52:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.952 20:52:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.952 20:52:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.952 20:52:28 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:03.952 20:52:28 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:03.952 20:52:28 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.952 20:52:28 -- common/autotest_common.sh@10 -- # set +x 00:21:12.143 20:52:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:12.143 20:52:35 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.143 20:52:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.143 20:52:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.143 20:52:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.143 20:52:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.143 20:52:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.143 20:52:35 -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.143 20:52:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.143 20:52:35 -- nvmf/common.sh@296 -- # e810=() 00:21:12.143 20:52:35 -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.143 20:52:35 -- nvmf/common.sh@297 -- # x722=() 00:21:12.143 
20:52:35 -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.143 20:52:35 -- nvmf/common.sh@298 -- # mlx=() 00:21:12.143 20:52:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.143 20:52:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.143 20:52:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.143 20:52:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.143 20:52:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.143 20:52:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.143 20:52:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:12.143 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:12.143 20:52:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.143 20:52:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:12.143 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:12.143 20:52:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.143 20:52:35 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.143 20:52:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.144 20:52:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.144 20:52:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.144 20:52:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.144 20:52:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:12.144 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:12.144 20:52:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.144 20:52:35 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.144 20:52:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.144 20:52:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:12.144 20:52:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.144 20:52:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:12.144 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:12.144 20:52:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.144 20:52:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:12.144 20:52:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:12.144 20:52:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:12.144 20:52:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:12.144 20:52:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:12.144 20:52:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.144 20:52:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.144 20:52:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.144 20:52:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.144 20:52:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.144 20:52:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.144 20:52:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.144 20:52:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.144 20:52:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.144 20:52:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.144 20:52:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.144 20:52:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.144 20:52:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.144 20:52:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.144 20:52:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.144 20:52:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.144 20:52:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.144 20:52:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.144 20:52:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.144 20:52:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:21:12.144 00:21:12.144 --- 10.0.0.2 ping statistics --- 00:21:12.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.144 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:21:12.144 20:52:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:12.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:21:12.144 00:21:12.144 --- 10.0.0.1 ping statistics --- 00:21:12.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.144 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:12.144 20:52:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.144 20:52:35 -- nvmf/common.sh@411 -- # return 0 00:21:12.144 20:52:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:12.144 20:52:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.144 20:52:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:12.144 20:52:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:12.144 20:52:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.144 20:52:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:12.144 20:52:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:12.144 20:52:35 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:12.144 20:52:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:12.144 20:52:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:12.144 20:52:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 20:52:35 -- nvmf/common.sh@470 -- # nvmfpid=2843595 00:21:12.144 20:52:35 -- nvmf/common.sh@471 -- # waitforlisten 2843595 00:21:12.144 20:52:35 -- common/autotest_common.sh@817 -- # '[' -z 2843595 ']' 00:21:12.144 20:52:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:12.144 20:52:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.144 20:52:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:12.144 20:52:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.144 20:52:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:12.144 20:52:35 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 [2024-04-24 20:52:35.781753] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:21:12.144 [2024-04-24 20:52:35.781820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.144 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.144 [2024-04-24 20:52:35.869154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:12.144 [2024-04-24 20:52:35.963357] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.144 [2024-04-24 20:52:35.963413] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.144 [2024-04-24 20:52:35.963422] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.144 [2024-04-24 20:52:35.963432] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.144 [2024-04-24 20:52:35.963438] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
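Behind the helper plumbing, the nvmf_tcp_init trace above sets up a small two-port topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace and addressed as the target at 10.0.0.2, while the sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on the initiator port
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator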
00:21:12.144 [2024-04-24 20:52:35.963571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.144 [2024-04-24 20:52:35.963701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.144 [2024-04-24 20:52:35.963866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.144 [2024-04-24 20:52:35.963868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.144 20:52:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:12.144 20:52:36 -- common/autotest_common.sh@850 -- # return 0 00:21:12.144 20:52:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:12.144 20:52:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:12.144 20:52:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 20:52:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.144 20:52:36 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:12.144 20:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.144 20:52:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 [2024-04-24 20:52:36.707547] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.144 20:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.144 20:52:36 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:12.144 20:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.144 20:52:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 Malloc0 00:21:12.144 20:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.144 20:52:36 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:12.144 20:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.144 20:52:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 20:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.144 20:52:36 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:12.144 20:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.144 20:52:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 20:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.144 20:52:36 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:12.144 20:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.144 20:52:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 [2024-04-24 20:52:36.766948] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.144 20:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.144 20:52:36 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:12.144 20:52:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.144 20:52:36 -- common/autotest_common.sh@10 -- # set +x 00:21:12.144 [2024-04-24 20:52:36.778767] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:12.404 [ 00:21:12.404 { 00:21:12.404 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.404 "subtype": "Discovery", 00:21:12.404 "listen_addresses": [], 00:21:12.404 "allow_any_host": true, 00:21:12.404 "hosts": [] 00:21:12.404 }, 00:21:12.404 { 00:21:12.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:21:12.404 "subtype": "NVMe", 00:21:12.404 "listen_addresses": [ 00:21:12.404 { 00:21:12.404 "transport": "TCP", 00:21:12.404 "trtype": "TCP", 00:21:12.404 "adrfam": "IPv4", 00:21:12.404 "traddr": "10.0.0.2", 00:21:12.404 "trsvcid": "4420" 00:21:12.404 } 00:21:12.404 ], 00:21:12.404 "allow_any_host": true, 00:21:12.404 "hosts": [], 00:21:12.404 "serial_number": "SPDK00000000000001", 00:21:12.404 "model_number": "SPDK bdev Controller", 00:21:12.404 "max_namespaces": 2, 00:21:12.404 "min_cntlid": 1, 00:21:12.404 "max_cntlid": 65519, 00:21:12.404 "namespaces": [ 00:21:12.404 { 00:21:12.404 "nsid": 1, 00:21:12.404 "bdev_name": "Malloc0", 00:21:12.404 "name": "Malloc0", 00:21:12.404 "nguid": "814AD201D0304C03BFD0915367BE9012", 00:21:12.404 "uuid": "814ad201-d030-4c03-bfd0-915367be9012" 00:21:12.404 } 00:21:12.404 ] 00:21:12.404 } 00:21:12.404 ] 00:21:12.404 20:52:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.404 20:52:36 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:12.404 20:52:36 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:12.404 20:52:36 -- host/aer.sh@33 -- # aerpid=2843669 00:21:12.404 20:52:36 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:12.404 20:52:36 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:12.404 20:52:36 -- common/autotest_common.sh@1251 -- # local i=0 00:21:12.404 20:52:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.404 20:52:36 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:21:12.404 20:52:36 -- common/autotest_common.sh@1254 -- # i=1 00:21:12.404 20:52:36 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:12.404 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.404 20:52:36 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.404 20:52:36 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:21:12.404 20:52:36 -- common/autotest_common.sh@1254 -- # i=2 00:21:12.404 20:52:36 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:21:12.404 20:52:37 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.404 20:52:37 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:12.404 20:52:37 -- common/autotest_common.sh@1262 -- # return 0 00:21:12.404 20:52:37 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:12.404 20:52:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.404 20:52:37 -- common/autotest_common.sh@10 -- # set +x 00:21:12.404 Malloc1 00:21:12.404 20:52:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.404 20:52:37 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:12.404 20:52:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.404 20:52:37 -- common/autotest_common.sh@10 -- # set +x 00:21:12.664 20:52:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.664 20:52:37 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:12.664 20:52:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.664 20:52:37 -- common/autotest_common.sh@10 -- # set +x 00:21:12.664 Asynchronous Event Request test 00:21:12.664 Attaching to 10.0.0.2 00:21:12.664 Attached to 10.0.0.2 00:21:12.664 Registering asynchronous event callbacks... 
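Stripped of the test harness, the target-side provisioning traced above and the event the aer tool waits for come down to a handful of RPCs. A condensed sketch (scripts/rpc.py standing in for rpc_cmd, paths relative to the SPDK source tree, addresses and NQNs from this run):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # aer connects, arms Asynchronous Event Requests and waits for a namespace change
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # Adding a second namespace is what fires the AEN ("aer_cb - Changed Namespace" below)
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2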
00:21:12.664 Starting namespace attribute notice tests for all controllers... 00:21:12.664 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:12.664 aer_cb - Changed Namespace 00:21:12.664 Cleaning up... 00:21:12.664 [ 00:21:12.664 { 00:21:12.664 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:12.664 "subtype": "Discovery", 00:21:12.664 "listen_addresses": [], 00:21:12.664 "allow_any_host": true, 00:21:12.664 "hosts": [] 00:21:12.664 }, 00:21:12.664 { 00:21:12.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.664 "subtype": "NVMe", 00:21:12.664 "listen_addresses": [ 00:21:12.664 { 00:21:12.664 "transport": "TCP", 00:21:12.664 "trtype": "TCP", 00:21:12.664 "adrfam": "IPv4", 00:21:12.664 "traddr": "10.0.0.2", 00:21:12.664 "trsvcid": "4420" 00:21:12.664 } 00:21:12.664 ], 00:21:12.664 "allow_any_host": true, 00:21:12.664 "hosts": [], 00:21:12.664 "serial_number": "SPDK00000000000001", 00:21:12.664 "model_number": "SPDK bdev Controller", 00:21:12.664 "max_namespaces": 2, 00:21:12.664 "min_cntlid": 1, 00:21:12.664 "max_cntlid": 65519, 00:21:12.664 "namespaces": [ 00:21:12.664 { 00:21:12.664 "nsid": 1, 00:21:12.664 "bdev_name": "Malloc0", 00:21:12.664 "name": "Malloc0", 00:21:12.664 "nguid": "814AD201D0304C03BFD0915367BE9012", 00:21:12.664 "uuid": "814ad201-d030-4c03-bfd0-915367be9012" 00:21:12.664 }, 00:21:12.664 { 00:21:12.664 "nsid": 2, 00:21:12.664 "bdev_name": "Malloc1", 00:21:12.664 "name": "Malloc1", 00:21:12.664 "nguid": "11EB01C73C184102A6799B076452A91C", 00:21:12.664 "uuid": "11eb01c7-3c18-4102-a679-9b076452a91c" 00:21:12.664 } 00:21:12.664 ] 00:21:12.664 } 00:21:12.664 ] 00:21:12.664 20:52:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.664 20:52:37 -- host/aer.sh@43 -- # wait 2843669 00:21:12.664 20:52:37 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:12.664 20:52:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.664 20:52:37 -- common/autotest_common.sh@10 -- # set +x 00:21:12.664 20:52:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.664 20:52:37 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:12.664 20:52:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.664 20:52:37 -- common/autotest_common.sh@10 -- # set +x 00:21:12.664 20:52:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.664 20:52:37 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.664 20:52:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.664 20:52:37 -- common/autotest_common.sh@10 -- # set +x 00:21:12.664 20:52:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.664 20:52:37 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:12.664 20:52:37 -- host/aer.sh@51 -- # nvmftestfini 00:21:12.664 20:52:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:12.664 20:52:37 -- nvmf/common.sh@117 -- # sync 00:21:12.664 20:52:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:12.664 20:52:37 -- nvmf/common.sh@120 -- # set +e 00:21:12.664 20:52:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:12.664 20:52:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:12.664 rmmod nvme_tcp 00:21:12.664 rmmod nvme_fabrics 00:21:12.664 rmmod nvme_keyring 00:21:12.664 20:52:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:12.664 20:52:37 -- nvmf/common.sh@124 -- # set -e 00:21:12.664 20:52:37 -- nvmf/common.sh@125 -- # return 0 00:21:12.664 20:52:37 -- nvmf/common.sh@478 -- # '[' -n 2843595 ']' 00:21:12.664 20:52:37 
-- nvmf/common.sh@479 -- # killprocess 2843595 00:21:12.664 20:52:37 -- common/autotest_common.sh@936 -- # '[' -z 2843595 ']' 00:21:12.664 20:52:37 -- common/autotest_common.sh@940 -- # kill -0 2843595 00:21:12.664 20:52:37 -- common/autotest_common.sh@941 -- # uname 00:21:12.664 20:52:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:12.664 20:52:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2843595 00:21:12.664 20:52:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:12.664 20:52:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:12.665 20:52:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2843595' 00:21:12.665 killing process with pid 2843595 00:21:12.665 20:52:37 -- common/autotest_common.sh@955 -- # kill 2843595 00:21:12.665 [2024-04-24 20:52:37.240587] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:12.665 20:52:37 -- common/autotest_common.sh@960 -- # wait 2843595 00:21:12.925 20:52:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:12.925 20:52:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:12.925 20:52:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:12.925 20:52:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:12.925 20:52:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:12.925 20:52:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.925 20:52:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:12.925 20:52:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.837 20:52:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:14.837 00:21:14.837 real 0m11.142s 00:21:14.837 user 0m7.850s 00:21:14.837 sys 0m5.874s 00:21:14.837 20:52:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:14.837 20:52:39 -- common/autotest_common.sh@10 -- # set +x 00:21:14.837 ************************************ 00:21:14.837 END TEST nvmf_aer 00:21:14.837 ************************************ 00:21:15.097 20:52:39 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.097 20:52:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:15.097 20:52:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:15.097 20:52:39 -- common/autotest_common.sh@10 -- # set +x 00:21:15.097 ************************************ 00:21:15.097 START TEST nvmf_async_init 00:21:15.097 ************************************ 00:21:15.097 20:52:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:15.358 * Looking for test storage... 
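The nvmftestfini sequence traced just above is the same teardown every host test in this run goes through: unload the initiator-side NVMe kernel modules, kill the target, and flush the initiator address. Roughly, using the names from the trace:

  modprobe -v -r nvme-tcp                 # also drops nvme_fabrics/nvme_keyring, per the rmmod output above
  modprobe -v -r nvme-fabrics
  killprocess "$nvmfpid"                  # the nvmf_tgt started for the test (2843595 here)
  ip -4 addr flush cvl_0_1                # _remove_spdk_ns runs just before this (its trace is redirected away above)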
00:21:15.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:15.358 20:52:39 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.358 20:52:39 -- nvmf/common.sh@7 -- # uname -s 00:21:15.358 20:52:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.358 20:52:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.358 20:52:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.358 20:52:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.358 20:52:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.358 20:52:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.358 20:52:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.358 20:52:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.358 20:52:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.358 20:52:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.358 20:52:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:15.358 20:52:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:15.358 20:52:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.358 20:52:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.358 20:52:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.358 20:52:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.358 20:52:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.358 20:52:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.358 20:52:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.358 20:52:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.359 20:52:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.359 20:52:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.359 20:52:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.359 20:52:39 -- paths/export.sh@5 -- # export PATH 00:21:15.359 20:52:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.359 20:52:39 -- nvmf/common.sh@47 -- # : 0 00:21:15.359 20:52:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.359 20:52:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.359 20:52:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.359 20:52:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.359 20:52:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.359 20:52:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.359 20:52:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.359 20:52:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.359 20:52:39 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:15.359 20:52:39 -- host/async_init.sh@14 -- # null_block_size=512 00:21:15.359 20:52:39 -- host/async_init.sh@15 -- # null_bdev=null0 00:21:15.359 20:52:39 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:15.359 20:52:39 -- host/async_init.sh@20 -- # uuidgen 00:21:15.359 20:52:39 -- host/async_init.sh@20 -- # tr -d - 00:21:15.359 20:52:39 -- host/async_init.sh@20 -- # nguid=c1fbfa242bf045a38e1774178e3b1edc 00:21:15.359 20:52:39 -- host/async_init.sh@22 -- # nvmftestinit 00:21:15.359 20:52:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:15.359 20:52:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.359 20:52:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:15.359 20:52:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:15.359 20:52:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:15.359 20:52:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.359 20:52:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:15.359 20:52:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.359 20:52:39 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:15.359 20:52:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:15.359 20:52:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.359 20:52:39 -- common/autotest_common.sh@10 -- # set +x 00:21:23.503 20:52:46 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:23.503 20:52:46 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:23.503 20:52:46 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:23.503 20:52:46 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:23.503 20:52:46 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:23.503 20:52:46 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:23.503 20:52:46 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:23.503 20:52:46 -- nvmf/common.sh@295 -- # net_devs=() 00:21:23.503 20:52:46 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:23.503 20:52:46 -- nvmf/common.sh@296 -- # e810=() 00:21:23.503 20:52:46 -- nvmf/common.sh@296 -- # local -ga e810 00:21:23.503 20:52:46 -- nvmf/common.sh@297 -- # x722=() 00:21:23.503 20:52:46 -- nvmf/common.sh@297 -- # local -ga x722 00:21:23.503 20:52:46 -- nvmf/common.sh@298 -- # mlx=() 00:21:23.503 20:52:46 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:23.503 20:52:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.503 20:52:46 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:23.503 20:52:46 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:23.503 20:52:46 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:23.503 20:52:46 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:23.503 20:52:46 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:23.503 20:52:46 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:23.503 20:52:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.503 20:52:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:23.503 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:23.503 20:52:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.503 20:52:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:23.504 20:52:46 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:23.504 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:23.504 20:52:46 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:23.504 20:52:46 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.504 
20:52:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.504 20:52:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:23.504 20:52:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.504 20:52:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:23.504 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:23.504 20:52:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.504 20:52:46 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:23.504 20:52:46 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.504 20:52:46 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:23.504 20:52:46 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.504 20:52:46 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:23.504 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:23.504 20:52:46 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.504 20:52:46 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:23.504 20:52:46 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:23.504 20:52:46 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:23.504 20:52:46 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:23.504 20:52:46 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:23.504 20:52:46 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.504 20:52:46 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:23.504 20:52:46 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:23.504 20:52:46 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:23.504 20:52:46 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:23.504 20:52:46 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:23.504 20:52:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:23.504 20:52:46 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.504 20:52:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:23.504 20:52:46 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:23.504 20:52:46 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:23.504 20:52:46 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:23.504 20:52:46 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:23.504 20:52:46 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:23.504 20:52:46 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:23.504 20:52:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:23.504 20:52:47 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:23.504 20:52:47 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:23.504 20:52:47 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:23.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:23.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:21:23.504 00:21:23.504 --- 10.0.0.2 ping statistics --- 00:21:23.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.504 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:21:23.504 20:52:47 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:23.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:23.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:21:23.504 00:21:23.504 --- 10.0.0.1 ping statistics --- 00:21:23.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.504 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:21:23.504 20:52:47 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.504 20:52:47 -- nvmf/common.sh@411 -- # return 0 00:21:23.504 20:52:47 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:23.504 20:52:47 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.504 20:52:47 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:23.504 20:52:47 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:23.504 20:52:47 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.504 20:52:47 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:23.504 20:52:47 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:23.504 20:52:47 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:23.504 20:52:47 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:23.504 20:52:47 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:23.504 20:52:47 -- common/autotest_common.sh@10 -- # set +x 00:21:23.504 20:52:47 -- nvmf/common.sh@470 -- # nvmfpid=2847995 00:21:23.504 20:52:47 -- nvmf/common.sh@471 -- # waitforlisten 2847995 00:21:23.504 20:52:47 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:23.504 20:52:47 -- common/autotest_common.sh@817 -- # '[' -z 2847995 ']' 00:21:23.504 20:52:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.504 20:52:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:23.504 20:52:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.504 20:52:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:23.504 20:52:47 -- common/autotest_common.sh@10 -- # set +x 00:21:23.504 [2024-04-24 20:52:47.251999] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:21:23.504 [2024-04-24 20:52:47.252090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.504 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.504 [2024-04-24 20:52:47.343647] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.504 [2024-04-24 20:52:47.436512] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.504 [2024-04-24 20:52:47.436568] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.504 [2024-04-24 20:52:47.436576] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.504 [2024-04-24 20:52:47.436583] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.504 [2024-04-24 20:52:47.436589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:23.504 [2024-04-24 20:52:47.436622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.504 20:52:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:23.504 20:52:48 -- common/autotest_common.sh@850 -- # return 0 00:21:23.504 20:52:48 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:23.504 20:52:48 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:23.504 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.765 20:52:48 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.765 20:52:48 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:23.765 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.765 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.765 [2024-04-24 20:52:48.176267] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.765 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.765 20:52:48 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:23.765 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.765 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.765 null0 00:21:23.765 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.765 20:52:48 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:23.765 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.765 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.765 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.765 20:52:48 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:23.765 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.765 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.765 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.765 20:52:48 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c1fbfa242bf045a38e1774178e3b1edc 00:21:23.765 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.765 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.765 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.765 20:52:48 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:23.765 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.765 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:23.765 [2024-04-24 20:52:48.236588] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.765 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:23.765 20:52:48 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:23.765 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:23.765 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.026 nvme0n1 00:21:24.027 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.027 20:52:48 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:24.027 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.027 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.027 [ 00:21:24.027 { 00:21:24.027 "name": "nvme0n1", 00:21:24.027 "aliases": [ 00:21:24.027 
"c1fbfa24-2bf0-45a3-8e17-74178e3b1edc" 00:21:24.027 ], 00:21:24.027 "product_name": "NVMe disk", 00:21:24.027 "block_size": 512, 00:21:24.027 "num_blocks": 2097152, 00:21:24.027 "uuid": "c1fbfa24-2bf0-45a3-8e17-74178e3b1edc", 00:21:24.027 "assigned_rate_limits": { 00:21:24.027 "rw_ios_per_sec": 0, 00:21:24.027 "rw_mbytes_per_sec": 0, 00:21:24.027 "r_mbytes_per_sec": 0, 00:21:24.027 "w_mbytes_per_sec": 0 00:21:24.027 }, 00:21:24.027 "claimed": false, 00:21:24.027 "zoned": false, 00:21:24.027 "supported_io_types": { 00:21:24.027 "read": true, 00:21:24.027 "write": true, 00:21:24.027 "unmap": false, 00:21:24.027 "write_zeroes": true, 00:21:24.027 "flush": true, 00:21:24.027 "reset": true, 00:21:24.027 "compare": true, 00:21:24.027 "compare_and_write": true, 00:21:24.027 "abort": true, 00:21:24.027 "nvme_admin": true, 00:21:24.027 "nvme_io": true 00:21:24.027 }, 00:21:24.027 "memory_domains": [ 00:21:24.027 { 00:21:24.027 "dma_device_id": "system", 00:21:24.027 "dma_device_type": 1 00:21:24.027 } 00:21:24.027 ], 00:21:24.027 "driver_specific": { 00:21:24.027 "nvme": [ 00:21:24.027 { 00:21:24.027 "trid": { 00:21:24.027 "trtype": "TCP", 00:21:24.027 "adrfam": "IPv4", 00:21:24.027 "traddr": "10.0.0.2", 00:21:24.027 "trsvcid": "4420", 00:21:24.027 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:24.027 }, 00:21:24.027 "ctrlr_data": { 00:21:24.027 "cntlid": 1, 00:21:24.027 "vendor_id": "0x8086", 00:21:24.027 "model_number": "SPDK bdev Controller", 00:21:24.027 "serial_number": "00000000000000000000", 00:21:24.027 "firmware_revision": "24.05", 00:21:24.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.027 "oacs": { 00:21:24.027 "security": 0, 00:21:24.027 "format": 0, 00:21:24.027 "firmware": 0, 00:21:24.027 "ns_manage": 0 00:21:24.027 }, 00:21:24.027 "multi_ctrlr": true, 00:21:24.027 "ana_reporting": false 00:21:24.027 }, 00:21:24.027 "vs": { 00:21:24.027 "nvme_version": "1.3" 00:21:24.027 }, 00:21:24.027 "ns_data": { 00:21:24.027 "id": 1, 00:21:24.027 "can_share": true 00:21:24.027 } 00:21:24.027 } 00:21:24.027 ], 00:21:24.027 "mp_policy": "active_passive" 00:21:24.027 } 00:21:24.027 } 00:21:24.027 ] 00:21:24.027 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.027 20:52:48 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:24.027 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.027 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.027 [2024-04-24 20:52:48.506734] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:24.027 [2024-04-24 20:52:48.506815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x81b220 (9): Bad file descriptor 00:21:24.027 [2024-04-24 20:52:48.638831] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:24.027 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.027 20:52:48 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:24.027 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.027 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.027 [ 00:21:24.027 { 00:21:24.027 "name": "nvme0n1", 00:21:24.027 "aliases": [ 00:21:24.027 "c1fbfa24-2bf0-45a3-8e17-74178e3b1edc" 00:21:24.027 ], 00:21:24.027 "product_name": "NVMe disk", 00:21:24.027 "block_size": 512, 00:21:24.027 "num_blocks": 2097152, 00:21:24.027 "uuid": "c1fbfa24-2bf0-45a3-8e17-74178e3b1edc", 00:21:24.027 "assigned_rate_limits": { 00:21:24.027 "rw_ios_per_sec": 0, 00:21:24.027 "rw_mbytes_per_sec": 0, 00:21:24.027 "r_mbytes_per_sec": 0, 00:21:24.027 "w_mbytes_per_sec": 0 00:21:24.027 }, 00:21:24.027 "claimed": false, 00:21:24.027 "zoned": false, 00:21:24.027 "supported_io_types": { 00:21:24.027 "read": true, 00:21:24.027 "write": true, 00:21:24.027 "unmap": false, 00:21:24.027 "write_zeroes": true, 00:21:24.027 "flush": true, 00:21:24.027 "reset": true, 00:21:24.027 "compare": true, 00:21:24.027 "compare_and_write": true, 00:21:24.027 "abort": true, 00:21:24.027 "nvme_admin": true, 00:21:24.027 "nvme_io": true 00:21:24.027 }, 00:21:24.027 "memory_domains": [ 00:21:24.027 { 00:21:24.027 "dma_device_id": "system", 00:21:24.027 "dma_device_type": 1 00:21:24.027 } 00:21:24.027 ], 00:21:24.027 "driver_specific": { 00:21:24.027 "nvme": [ 00:21:24.027 { 00:21:24.027 "trid": { 00:21:24.027 "trtype": "TCP", 00:21:24.027 "adrfam": "IPv4", 00:21:24.027 "traddr": "10.0.0.2", 00:21:24.027 "trsvcid": "4420", 00:21:24.027 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:24.027 }, 00:21:24.027 "ctrlr_data": { 00:21:24.027 "cntlid": 2, 00:21:24.027 "vendor_id": "0x8086", 00:21:24.027 "model_number": "SPDK bdev Controller", 00:21:24.027 "serial_number": "00000000000000000000", 00:21:24.027 "firmware_revision": "24.05", 00:21:24.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.027 "oacs": { 00:21:24.027 "security": 0, 00:21:24.027 "format": 0, 00:21:24.027 "firmware": 0, 00:21:24.027 "ns_manage": 0 00:21:24.027 }, 00:21:24.027 "multi_ctrlr": true, 00:21:24.027 "ana_reporting": false 00:21:24.027 }, 00:21:24.027 "vs": { 00:21:24.027 "nvme_version": "1.3" 00:21:24.027 }, 00:21:24.027 "ns_data": { 00:21:24.027 "id": 1, 00:21:24.027 "can_share": true 00:21:24.027 } 00:21:24.027 } 00:21:24.027 ], 00:21:24.027 "mp_policy": "active_passive" 00:21:24.027 } 00:21:24.027 } 00:21:24.027 ] 00:21:24.027 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.027 20:52:48 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.027 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.027 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.289 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.289 20:52:48 -- host/async_init.sh@53 -- # mktemp 00:21:24.289 20:52:48 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2tLOp24NaC 00:21:24.289 20:52:48 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:24.289 20:52:48 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2tLOp24NaC 00:21:24.289 20:52:48 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:24.289 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.289 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.289 20:52:48 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.289 20:52:48 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:24.289 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.289 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.289 [2024-04-24 20:52:48.711382] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.289 [2024-04-24 20:52:48.711546] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:24.289 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.289 20:52:48 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2tLOp24NaC 00:21:24.289 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.289 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.289 [2024-04-24 20:52:48.723416] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:24.289 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.289 20:52:48 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2tLOp24NaC 00:21:24.289 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.289 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.289 [2024-04-24 20:52:48.735447] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.289 [2024-04-24 20:52:48.735497] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:24.289 nvme0n1 00:21:24.289 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.289 20:52:48 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:24.289 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.289 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.289 [ 00:21:24.289 { 00:21:24.289 "name": "nvme0n1", 00:21:24.289 "aliases": [ 00:21:24.289 "c1fbfa24-2bf0-45a3-8e17-74178e3b1edc" 00:21:24.289 ], 00:21:24.289 "product_name": "NVMe disk", 00:21:24.289 "block_size": 512, 00:21:24.289 "num_blocks": 2097152, 00:21:24.289 "uuid": "c1fbfa24-2bf0-45a3-8e17-74178e3b1edc", 00:21:24.289 "assigned_rate_limits": { 00:21:24.289 "rw_ios_per_sec": 0, 00:21:24.289 "rw_mbytes_per_sec": 0, 00:21:24.289 "r_mbytes_per_sec": 0, 00:21:24.289 "w_mbytes_per_sec": 0 00:21:24.289 }, 00:21:24.289 "claimed": false, 00:21:24.289 "zoned": false, 00:21:24.289 "supported_io_types": { 00:21:24.289 "read": true, 00:21:24.289 "write": true, 00:21:24.289 "unmap": false, 00:21:24.289 "write_zeroes": true, 00:21:24.289 "flush": true, 00:21:24.289 "reset": true, 00:21:24.289 "compare": true, 00:21:24.289 "compare_and_write": true, 00:21:24.289 "abort": true, 00:21:24.289 "nvme_admin": true, 00:21:24.289 "nvme_io": true 00:21:24.289 }, 00:21:24.289 "memory_domains": [ 00:21:24.289 { 00:21:24.289 "dma_device_id": "system", 00:21:24.289 "dma_device_type": 1 00:21:24.289 } 00:21:24.289 ], 00:21:24.289 "driver_specific": { 00:21:24.289 "nvme": [ 00:21:24.289 { 00:21:24.289 "trid": { 00:21:24.289 "trtype": "TCP", 00:21:24.289 "adrfam": "IPv4", 00:21:24.289 "traddr": "10.0.0.2", 
00:21:24.289 "trsvcid": "4421", 00:21:24.289 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:24.289 }, 00:21:24.289 "ctrlr_data": { 00:21:24.289 "cntlid": 3, 00:21:24.289 "vendor_id": "0x8086", 00:21:24.289 "model_number": "SPDK bdev Controller", 00:21:24.289 "serial_number": "00000000000000000000", 00:21:24.289 "firmware_revision": "24.05", 00:21:24.289 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:24.289 "oacs": { 00:21:24.289 "security": 0, 00:21:24.289 "format": 0, 00:21:24.289 "firmware": 0, 00:21:24.289 "ns_manage": 0 00:21:24.289 }, 00:21:24.289 "multi_ctrlr": true, 00:21:24.289 "ana_reporting": false 00:21:24.289 }, 00:21:24.289 "vs": { 00:21:24.289 "nvme_version": "1.3" 00:21:24.289 }, 00:21:24.289 "ns_data": { 00:21:24.289 "id": 1, 00:21:24.289 "can_share": true 00:21:24.289 } 00:21:24.289 } 00:21:24.289 ], 00:21:24.289 "mp_policy": "active_passive" 00:21:24.289 } 00:21:24.289 } 00:21:24.289 ] 00:21:24.289 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.289 20:52:48 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:24.289 20:52:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:24.289 20:52:48 -- common/autotest_common.sh@10 -- # set +x 00:21:24.289 20:52:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:24.289 20:52:48 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.2tLOp24NaC 00:21:24.289 20:52:48 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:24.289 20:52:48 -- host/async_init.sh@78 -- # nvmftestfini 00:21:24.289 20:52:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:24.289 20:52:48 -- nvmf/common.sh@117 -- # sync 00:21:24.289 20:52:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:24.289 20:52:48 -- nvmf/common.sh@120 -- # set +e 00:21:24.289 20:52:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:24.289 20:52:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:24.289 rmmod nvme_tcp 00:21:24.289 rmmod nvme_fabrics 00:21:24.289 rmmod nvme_keyring 00:21:24.289 20:52:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:24.289 20:52:48 -- nvmf/common.sh@124 -- # set -e 00:21:24.289 20:52:48 -- nvmf/common.sh@125 -- # return 0 00:21:24.289 20:52:48 -- nvmf/common.sh@478 -- # '[' -n 2847995 ']' 00:21:24.289 20:52:48 -- nvmf/common.sh@479 -- # killprocess 2847995 00:21:24.289 20:52:48 -- common/autotest_common.sh@936 -- # '[' -z 2847995 ']' 00:21:24.289 20:52:48 -- common/autotest_common.sh@940 -- # kill -0 2847995 00:21:24.289 20:52:48 -- common/autotest_common.sh@941 -- # uname 00:21:24.289 20:52:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:24.289 20:52:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2847995 00:21:24.551 20:52:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:24.551 20:52:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:24.551 20:52:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2847995' 00:21:24.551 killing process with pid 2847995 00:21:24.551 20:52:48 -- common/autotest_common.sh@955 -- # kill 2847995 00:21:24.551 [2024-04-24 20:52:48.978543] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:24.551 [2024-04-24 20:52:48.978586] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:24.551 20:52:48 -- common/autotest_common.sh@960 -- # wait 2847995 00:21:24.551 20:52:49 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:24.551 20:52:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:24.551 20:52:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:24.551 20:52:49 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.551 20:52:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.551 20:52:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.551 20:52:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.551 20:52:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.096 20:52:51 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:27.096 00:21:27.096 real 0m11.574s 00:21:27.096 user 0m4.202s 00:21:27.096 sys 0m5.942s 00:21:27.096 20:52:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:27.096 20:52:51 -- common/autotest_common.sh@10 -- # set +x 00:21:27.096 ************************************ 00:21:27.096 END TEST nvmf_async_init 00:21:27.096 ************************************ 00:21:27.096 20:52:51 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:27.096 20:52:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:27.096 20:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:27.096 20:52:51 -- common/autotest_common.sh@10 -- # set +x 00:21:27.096 ************************************ 00:21:27.096 START TEST dma 00:21:27.096 ************************************ 00:21:27.096 20:52:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:27.096 * Looking for test storage... 00:21:27.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.096 20:52:51 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.096 20:52:51 -- nvmf/common.sh@7 -- # uname -s 00:21:27.096 20:52:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.096 20:52:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.096 20:52:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.096 20:52:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.096 20:52:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.097 20:52:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.097 20:52:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.097 20:52:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.097 20:52:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.097 20:52:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.097 20:52:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:27.097 20:52:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:27.097 20:52:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.097 20:52:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.097 20:52:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.097 20:52:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.097 20:52:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.097 20:52:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.097 20:52:51 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.097 20:52:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.097 20:52:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.097 20:52:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.097 20:52:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.097 20:52:51 -- paths/export.sh@5 -- # export PATH 00:21:27.097 20:52:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.097 20:52:51 -- nvmf/common.sh@47 -- # : 0 00:21:27.097 20:52:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.097 20:52:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.097 20:52:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.097 20:52:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.097 20:52:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.097 20:52:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.097 20:52:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.097 20:52:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.097 20:52:51 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:27.097 20:52:51 -- host/dma.sh@13 -- # exit 0 00:21:27.097 00:21:27.097 real 0m0.144s 00:21:27.097 user 0m0.062s 00:21:27.097 sys 0m0.090s 00:21:27.097 20:52:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:27.097 20:52:51 -- common/autotest_common.sh@10 -- # set +x 00:21:27.097 ************************************ 00:21:27.097 END TEST dma 00:21:27.097 
************************************ 00:21:27.097 20:52:51 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:27.097 20:52:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:27.097 20:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:27.097 20:52:51 -- common/autotest_common.sh@10 -- # set +x 00:21:27.358 ************************************ 00:21:27.358 START TEST nvmf_identify 00:21:27.358 ************************************ 00:21:27.358 20:52:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:27.358 * Looking for test storage... 00:21:27.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:27.358 20:52:51 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:27.358 20:52:51 -- nvmf/common.sh@7 -- # uname -s 00:21:27.358 20:52:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:27.358 20:52:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:27.358 20:52:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:27.358 20:52:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:27.358 20:52:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:27.358 20:52:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:27.358 20:52:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:27.358 20:52:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:27.359 20:52:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:27.359 20:52:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:27.359 20:52:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:27.359 20:52:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:27.359 20:52:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:27.359 20:52:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:27.359 20:52:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:27.359 20:52:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:27.359 20:52:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:27.359 20:52:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:27.359 20:52:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:27.359 20:52:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:27.359 20:52:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.359 20:52:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.359 20:52:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.359 20:52:51 -- paths/export.sh@5 -- # export PATH 00:21:27.359 20:52:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:27.359 20:52:51 -- nvmf/common.sh@47 -- # : 0 00:21:27.359 20:52:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:27.359 20:52:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:27.359 20:52:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:27.359 20:52:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:27.359 20:52:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:27.359 20:52:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:27.359 20:52:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:27.359 20:52:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:27.359 20:52:51 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:27.359 20:52:51 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:27.359 20:52:51 -- host/identify.sh@14 -- # nvmftestinit 00:21:27.359 20:52:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:27.359 20:52:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:27.359 20:52:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:27.359 20:52:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:27.359 20:52:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:27.359 20:52:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:27.359 20:52:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:27.359 20:52:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:27.359 20:52:51 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:27.359 20:52:51 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:27.359 20:52:51 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:27.359 20:52:51 -- common/autotest_common.sh@10 -- # set +x 00:21:35.497 20:52:58 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:21:35.497 20:52:58 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:35.497 20:52:58 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:35.497 20:52:58 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:35.497 20:52:58 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:35.498 20:52:58 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:35.498 20:52:58 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:35.498 20:52:58 -- nvmf/common.sh@295 -- # net_devs=() 00:21:35.498 20:52:58 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:35.498 20:52:58 -- nvmf/common.sh@296 -- # e810=() 00:21:35.498 20:52:58 -- nvmf/common.sh@296 -- # local -ga e810 00:21:35.498 20:52:58 -- nvmf/common.sh@297 -- # x722=() 00:21:35.498 20:52:58 -- nvmf/common.sh@297 -- # local -ga x722 00:21:35.498 20:52:58 -- nvmf/common.sh@298 -- # mlx=() 00:21:35.498 20:52:58 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:35.498 20:52:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:35.498 20:52:58 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:35.498 20:52:58 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:35.498 20:52:58 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:35.498 20:52:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.498 20:52:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:35.498 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:35.498 20:52:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:35.498 20:52:58 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:35.498 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:35.498 20:52:58 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 
00:21:35.498 20:52:58 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.498 20:52:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.498 20:52:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:35.498 20:52:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.498 20:52:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:35.498 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:35.498 20:52:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.498 20:52:58 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:35.498 20:52:58 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:35.498 20:52:58 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:35.498 20:52:58 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:35.498 20:52:58 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:35.498 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:35.498 20:52:58 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:35.498 20:52:58 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:35.498 20:52:58 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:35.498 20:52:58 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:35.498 20:52:58 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:35.498 20:52:58 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:35.498 20:52:58 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:35.498 20:52:58 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:35.498 20:52:58 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:35.498 20:52:58 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:35.498 20:52:58 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:35.498 20:52:58 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:35.498 20:52:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:35.498 20:52:58 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:35.498 20:52:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:35.498 20:52:58 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:35.498 20:52:58 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:35.498 20:52:58 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:35.498 20:52:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:35.498 20:52:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:35.498 20:52:59 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:35.498 20:52:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:35.498 20:52:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:35.498 20:52:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:35.498 20:52:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:35.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:35.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:21:35.498 00:21:35.498 --- 10.0.0.2 ping statistics --- 00:21:35.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.498 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:21:35.498 20:52:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:35.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:35.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:21:35.498 00:21:35.498 --- 10.0.0.1 ping statistics --- 00:21:35.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:35.498 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:21:35.498 20:52:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:35.498 20:52:59 -- nvmf/common.sh@411 -- # return 0 00:21:35.498 20:52:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:35.498 20:52:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:35.498 20:52:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:35.498 20:52:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:35.498 20:52:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:35.498 20:52:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:35.498 20:52:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:35.498 20:52:59 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:35.498 20:52:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:35.498 20:52:59 -- common/autotest_common.sh@10 -- # set +x 00:21:35.498 20:52:59 -- host/identify.sh@19 -- # nvmfpid=2852726 00:21:35.498 20:52:59 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:35.498 20:52:59 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:35.498 20:52:59 -- host/identify.sh@23 -- # waitforlisten 2852726 00:21:35.498 20:52:59 -- common/autotest_common.sh@817 -- # '[' -z 2852726 ']' 00:21:35.498 20:52:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.498 20:52:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:35.498 20:52:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.498 20:52:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:35.498 20:52:59 -- common/autotest_common.sh@10 -- # set +x 00:21:35.498 [2024-04-24 20:52:59.280992] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:21:35.498 [2024-04-24 20:52:59.281056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.498 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.498 [2024-04-24 20:52:59.367054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:35.498 [2024-04-24 20:52:59.461816] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.498 [2024-04-24 20:52:59.461876] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:35.498 [2024-04-24 20:52:59.461884] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.498 [2024-04-24 20:52:59.461891] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.498 [2024-04-24 20:52:59.461897] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.498 [2024-04-24 20:52:59.462030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.498 [2024-04-24 20:52:59.462162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.498 [2024-04-24 20:52:59.462332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.498 [2024-04-24 20:52:59.462332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:35.498 20:53:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:35.498 20:53:00 -- common/autotest_common.sh@850 -- # return 0 00:21:35.498 20:53:00 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.498 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.498 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.498 [2024-04-24 20:53:00.118271] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.498 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.498 20:53:00 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:35.498 20:53:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:35.499 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.763 20:53:00 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:35.763 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.763 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.763 Malloc0 00:21:35.763 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.763 20:53:00 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:35.763 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.763 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.763 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.763 20:53:00 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:35.764 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.764 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.764 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.764 20:53:00 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:35.764 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.764 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.764 [2024-04-24 20:53:00.217604] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.764 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.764 20:53:00 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:35.764 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.764 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.764 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.764 20:53:00 -- host/identify.sh@37 -- # 
rpc_cmd nvmf_get_subsystems 00:21:35.764 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:35.764 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:35.764 [2024-04-24 20:53:00.241423] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:35.764 [ 00:21:35.764 { 00:21:35.764 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:35.764 "subtype": "Discovery", 00:21:35.764 "listen_addresses": [ 00:21:35.764 { 00:21:35.764 "transport": "TCP", 00:21:35.764 "trtype": "TCP", 00:21:35.764 "adrfam": "IPv4", 00:21:35.764 "traddr": "10.0.0.2", 00:21:35.764 "trsvcid": "4420" 00:21:35.764 } 00:21:35.764 ], 00:21:35.764 "allow_any_host": true, 00:21:35.764 "hosts": [] 00:21:35.764 }, 00:21:35.764 { 00:21:35.764 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:35.764 "subtype": "NVMe", 00:21:35.764 "listen_addresses": [ 00:21:35.764 { 00:21:35.764 "transport": "TCP", 00:21:35.764 "trtype": "TCP", 00:21:35.764 "adrfam": "IPv4", 00:21:35.764 "traddr": "10.0.0.2", 00:21:35.764 "trsvcid": "4420" 00:21:35.764 } 00:21:35.764 ], 00:21:35.764 "allow_any_host": true, 00:21:35.764 "hosts": [], 00:21:35.764 "serial_number": "SPDK00000000000001", 00:21:35.764 "model_number": "SPDK bdev Controller", 00:21:35.764 "max_namespaces": 32, 00:21:35.764 "min_cntlid": 1, 00:21:35.764 "max_cntlid": 65519, 00:21:35.764 "namespaces": [ 00:21:35.764 { 00:21:35.764 "nsid": 1, 00:21:35.764 "bdev_name": "Malloc0", 00:21:35.764 "name": "Malloc0", 00:21:35.764 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:35.764 "eui64": "ABCDEF0123456789", 00:21:35.764 "uuid": "2aa8971a-1889-454d-9ab6-18a1adaac6c8" 00:21:35.764 } 00:21:35.764 ] 00:21:35.764 } 00:21:35.764 ] 00:21:35.764 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:35.764 20:53:00 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:35.764 [2024-04-24 20:53:00.277838] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:21:35.764 [2024-04-24 20:53:00.277893] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852816 ] 00:21:35.764 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.764 [2024-04-24 20:53:00.310372] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:35.764 [2024-04-24 20:53:00.310420] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:35.764 [2024-04-24 20:53:00.310424] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:35.764 [2024-04-24 20:53:00.310435] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:35.764 [2024-04-24 20:53:00.310442] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:35.764 [2024-04-24 20:53:00.313762] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:35.764 [2024-04-24 20:53:00.313795] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23c0cb0 0 00:21:35.764 [2024-04-24 20:53:00.321735] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:35.764 [2024-04-24 20:53:00.321748] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:35.764 [2024-04-24 20:53:00.321753] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:35.764 [2024-04-24 20:53:00.321756] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:35.764 [2024-04-24 20:53:00.321792] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.321797] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.321802] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.321816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:35.764 [2024-04-24 20:53:00.321831] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.329735] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.329744] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.329747] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.329752] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.329764] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:35.764 [2024-04-24 20:53:00.329771] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:35.764 [2024-04-24 20:53:00.329776] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:35.764 [2024-04-24 20:53:00.329790] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.329794] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.329798] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.329805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.329817] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.329904] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.329910] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.329914] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.329917] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.329923] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:35.764 [2024-04-24 20:53:00.329930] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:35.764 [2024-04-24 20:53:00.329937] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.329941] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.329944] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.329951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.329961] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.330028] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.330034] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.330038] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330042] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.330050] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:35.764 [2024-04-24 20:53:00.330058] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:35.764 [2024-04-24 20:53:00.330064] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330068] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330071] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.330078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.330088] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.330154] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 
20:53:00.330160] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.330163] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330167] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.330173] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:35.764 [2024-04-24 20:53:00.330182] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330185] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330189] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.330196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.330205] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.330269] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.330275] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.330279] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330282] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.330288] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:35.764 [2024-04-24 20:53:00.330292] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:35.764 [2024-04-24 20:53:00.330300] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:35.764 [2024-04-24 20:53:00.330405] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:35.764 [2024-04-24 20:53:00.330410] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:35.764 [2024-04-24 20:53:00.330419] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330422] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330426] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.330432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.330442] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.330510] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.330516] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.330521] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330525] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.330531] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:35.764 [2024-04-24 20:53:00.330539] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330543] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330547] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.330553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.330563] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.330625] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.330631] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.330634] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330638] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.330643] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:35.764 [2024-04-24 20:53:00.330648] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:35.764 [2024-04-24 20:53:00.330655] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:35.764 [2024-04-24 20:53:00.330663] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:35.764 [2024-04-24 20:53:00.330673] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330677] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.330683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.330693] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.330792] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:35.764 [2024-04-24 20:53:00.330799] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:35.764 [2024-04-24 20:53:00.330803] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330807] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23c0cb0): datao=0, datal=4096, cccid=0 00:21:35.764 [2024-04-24 20:53:00.330812] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2428a00) on tqpair(0x23c0cb0): expected_datao=0, payload_size=4096 00:21:35.764 [2024-04-24 20:53:00.330816] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330833] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.330837] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.372803] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.372812] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.372815] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.372819] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.372828] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:35.764 [2024-04-24 20:53:00.372835] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:35.764 [2024-04-24 20:53:00.372840] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:35.764 [2024-04-24 20:53:00.372844] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:35.764 [2024-04-24 20:53:00.372849] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:35.764 [2024-04-24 20:53:00.372854] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:35.764 [2024-04-24 20:53:00.372863] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:35.764 [2024-04-24 20:53:00.372869] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.372873] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.372877] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.372884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:35.764 [2024-04-24 20:53:00.372895] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.372960] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.372967] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.372970] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.372974] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428a00) on tqpair=0x23c0cb0 00:21:35.764 [2024-04-24 20:53:00.372982] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.372986] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.372989] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.372995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:21:35.764 [2024-04-24 20:53:00.373001] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373005] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373008] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.373014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.764 [2024-04-24 20:53:00.373020] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373024] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373027] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.373033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.764 [2024-04-24 20:53:00.373039] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373042] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373046] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.373051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.764 [2024-04-24 20:53:00.373056] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:35.764 [2024-04-24 20:53:00.373067] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:35.764 [2024-04-24 20:53:00.373075] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373079] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23c0cb0) 00:21:35.764 [2024-04-24 20:53:00.373085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.764 [2024-04-24 20:53:00.373097] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428a00, cid 0, qid 0 00:21:35.764 [2024-04-24 20:53:00.373102] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428b60, cid 1, qid 0 00:21:35.764 [2024-04-24 20:53:00.373107] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428cc0, cid 2, qid 0 00:21:35.764 [2024-04-24 20:53:00.373111] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:35.764 [2024-04-24 20:53:00.373116] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428f80, cid 4, qid 0 00:21:35.764 [2024-04-24 20:53:00.373229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.764 [2024-04-24 20:53:00.373235] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.764 [2024-04-24 20:53:00.373238] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.764 [2024-04-24 20:53:00.373242] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428f80) on tqpair=0x23c0cb0 
00:21:35.765 [2024-04-24 20:53:00.373248] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:35.765 [2024-04-24 20:53:00.373253] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:35.765 [2024-04-24 20:53:00.373263] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373267] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23c0cb0) 00:21:35.765 [2024-04-24 20:53:00.373273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.765 [2024-04-24 20:53:00.373282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428f80, cid 4, qid 0 00:21:35.765 [2024-04-24 20:53:00.373362] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:35.765 [2024-04-24 20:53:00.373368] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:35.765 [2024-04-24 20:53:00.373372] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373375] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23c0cb0): datao=0, datal=4096, cccid=4 00:21:35.765 [2024-04-24 20:53:00.373380] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2428f80) on tqpair(0x23c0cb0): expected_datao=0, payload_size=4096 00:21:35.765 [2024-04-24 20:53:00.373384] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373391] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373394] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373409] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.765 [2024-04-24 20:53:00.373415] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.765 [2024-04-24 20:53:00.373418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373422] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428f80) on tqpair=0x23c0cb0 00:21:35.765 [2024-04-24 20:53:00.373434] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:35.765 [2024-04-24 20:53:00.373451] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373455] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23c0cb0) 00:21:35.765 [2024-04-24 20:53:00.373462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.765 [2024-04-24 20:53:00.373471] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373474] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373478] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23c0cb0) 00:21:35.765 [2024-04-24 20:53:00.373484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:35.765 [2024-04-24 20:53:00.373497] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428f80, cid 4, qid 0 00:21:35.765 [2024-04-24 20:53:00.373502] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24290e0, cid 5, qid 0 00:21:35.765 [2024-04-24 20:53:00.373602] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:35.765 [2024-04-24 20:53:00.373608] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:35.765 [2024-04-24 20:53:00.373611] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373615] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23c0cb0): datao=0, datal=1024, cccid=4 00:21:35.765 [2024-04-24 20:53:00.373619] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2428f80) on tqpair(0x23c0cb0): expected_datao=0, payload_size=1024 00:21:35.765 [2024-04-24 20:53:00.373623] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373629] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373633] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:35.765 [2024-04-24 20:53:00.373644] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:35.765 [2024-04-24 20:53:00.373647] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:35.765 [2024-04-24 20:53:00.373651] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x24290e0) on tqpair=0x23c0cb0 00:21:36.064 [2024-04-24 20:53:00.417735] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.064 [2024-04-24 20:53:00.417747] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.064 [2024-04-24 20:53:00.417750] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.064 [2024-04-24 20:53:00.417754] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428f80) on tqpair=0x23c0cb0 00:21:36.064 [2024-04-24 20:53:00.417766] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.064 [2024-04-24 20:53:00.417770] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23c0cb0) 00:21:36.064 [2024-04-24 20:53:00.417777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.064 [2024-04-24 20:53:00.417793] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428f80, cid 4, qid 0 00:21:36.064 [2024-04-24 20:53:00.417869] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.064 [2024-04-24 20:53:00.417875] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.064 [2024-04-24 20:53:00.417879] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.064 [2024-04-24 20:53:00.417882] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23c0cb0): datao=0, datal=3072, cccid=4 00:21:36.064 [2024-04-24 20:53:00.417887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2428f80) on tqpair(0x23c0cb0): expected_datao=0, payload_size=3072 00:21:36.064 [2024-04-24 20:53:00.417891] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.064 [2024-04-24 20:53:00.417925] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
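For reference (not part of the captured run): the GET LOG PAGE (02) commands targeting log page 0x70 in the trace here are the discovery log being fetched over the admin queue; the formatted "Discovery Log Page" section appears a little further down. Roughly the same view of this target could be obtained with stock nvme-cli, assuming nvme-cli is installed on a host that can reach 10.0.0.2:

    # Fetch the discovery log from the SPDK target over NVMe/TCP (same two
    # entries as the Discovery Log Page section printed below by spdk_nvme_identify).
    nvme discover -t tcp -a 10.0.0.2 -s 4420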
00:21:36.064 [2024-04-24 20:53:00.417928] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.064 [2024-04-24 20:53:00.459782] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.064 [2024-04-24 20:53:00.459791] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.064 [2024-04-24 20:53:00.459797] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.064 [2024-04-24 20:53:00.459801] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428f80) on tqpair=0x23c0cb0 00:21:36.064 [2024-04-24 20:53:00.459811] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.065 [2024-04-24 20:53:00.459815] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23c0cb0) 00:21:36.065 [2024-04-24 20:53:00.459822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.065 [2024-04-24 20:53:00.459836] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428f80, cid 4, qid 0 00:21:36.065 [2024-04-24 20:53:00.459907] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.065 [2024-04-24 20:53:00.459913] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.065 [2024-04-24 20:53:00.459917] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.065 [2024-04-24 20:53:00.459920] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23c0cb0): datao=0, datal=8, cccid=4 00:21:36.065 [2024-04-24 20:53:00.459925] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2428f80) on tqpair(0x23c0cb0): expected_datao=0, payload_size=8 00:21:36.065 [2024-04-24 20:53:00.459929] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.065 [2024-04-24 20:53:00.459936] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.065 [2024-04-24 20:53:00.459939] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.065 [2024-04-24 20:53:00.504732] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.065 [2024-04-24 20:53:00.504742] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.065 [2024-04-24 20:53:00.504745] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.065 [2024-04-24 20:53:00.504749] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428f80) on tqpair=0x23c0cb0 00:21:36.065 ===================================================== 00:21:36.065 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:36.065 ===================================================== 00:21:36.065 Controller Capabilities/Features 00:21:36.065 ================================ 00:21:36.065 Vendor ID: 0000 00:21:36.065 Subsystem Vendor ID: 0000 00:21:36.065 Serial Number: .................... 00:21:36.065 Model Number: ........................................ 
00:21:36.065 Firmware Version: 24.05 00:21:36.065 Recommended Arb Burst: 0 00:21:36.065 IEEE OUI Identifier: 00 00 00 00:21:36.065 Multi-path I/O 00:21:36.065 May have multiple subsystem ports: No 00:21:36.065 May have multiple controllers: No 00:21:36.065 Associated with SR-IOV VF: No 00:21:36.065 Max Data Transfer Size: 131072 00:21:36.065 Max Number of Namespaces: 0 00:21:36.065 Max Number of I/O Queues: 1024 00:21:36.065 NVMe Specification Version (VS): 1.3 00:21:36.065 NVMe Specification Version (Identify): 1.3 00:21:36.065 Maximum Queue Entries: 128 00:21:36.065 Contiguous Queues Required: Yes 00:21:36.065 Arbitration Mechanisms Supported 00:21:36.065 Weighted Round Robin: Not Supported 00:21:36.065 Vendor Specific: Not Supported 00:21:36.065 Reset Timeout: 15000 ms 00:21:36.065 Doorbell Stride: 4 bytes 00:21:36.065 NVM Subsystem Reset: Not Supported 00:21:36.065 Command Sets Supported 00:21:36.065 NVM Command Set: Supported 00:21:36.065 Boot Partition: Not Supported 00:21:36.065 Memory Page Size Minimum: 4096 bytes 00:21:36.065 Memory Page Size Maximum: 4096 bytes 00:21:36.065 Persistent Memory Region: Not Supported 00:21:36.065 Optional Asynchronous Events Supported 00:21:36.065 Namespace Attribute Notices: Not Supported 00:21:36.065 Firmware Activation Notices: Not Supported 00:21:36.065 ANA Change Notices: Not Supported 00:21:36.065 PLE Aggregate Log Change Notices: Not Supported 00:21:36.065 LBA Status Info Alert Notices: Not Supported 00:21:36.065 EGE Aggregate Log Change Notices: Not Supported 00:21:36.065 Normal NVM Subsystem Shutdown event: Not Supported 00:21:36.065 Zone Descriptor Change Notices: Not Supported 00:21:36.065 Discovery Log Change Notices: Supported 00:21:36.065 Controller Attributes 00:21:36.065 128-bit Host Identifier: Not Supported 00:21:36.065 Non-Operational Permissive Mode: Not Supported 00:21:36.065 NVM Sets: Not Supported 00:21:36.065 Read Recovery Levels: Not Supported 00:21:36.065 Endurance Groups: Not Supported 00:21:36.065 Predictable Latency Mode: Not Supported 00:21:36.065 Traffic Based Keep ALive: Not Supported 00:21:36.065 Namespace Granularity: Not Supported 00:21:36.065 SQ Associations: Not Supported 00:21:36.065 UUID List: Not Supported 00:21:36.065 Multi-Domain Subsystem: Not Supported 00:21:36.065 Fixed Capacity Management: Not Supported 00:21:36.065 Variable Capacity Management: Not Supported 00:21:36.065 Delete Endurance Group: Not Supported 00:21:36.065 Delete NVM Set: Not Supported 00:21:36.065 Extended LBA Formats Supported: Not Supported 00:21:36.065 Flexible Data Placement Supported: Not Supported 00:21:36.065 00:21:36.065 Controller Memory Buffer Support 00:21:36.065 ================================ 00:21:36.065 Supported: No 00:21:36.065 00:21:36.065 Persistent Memory Region Support 00:21:36.065 ================================ 00:21:36.065 Supported: No 00:21:36.065 00:21:36.065 Admin Command Set Attributes 00:21:36.065 ============================ 00:21:36.065 Security Send/Receive: Not Supported 00:21:36.065 Format NVM: Not Supported 00:21:36.065 Firmware Activate/Download: Not Supported 00:21:36.065 Namespace Management: Not Supported 00:21:36.065 Device Self-Test: Not Supported 00:21:36.065 Directives: Not Supported 00:21:36.065 NVMe-MI: Not Supported 00:21:36.065 Virtualization Management: Not Supported 00:21:36.065 Doorbell Buffer Config: Not Supported 00:21:36.065 Get LBA Status Capability: Not Supported 00:21:36.065 Command & Feature Lockdown Capability: Not Supported 00:21:36.065 Abort Command Limit: 1 00:21:36.065 Async 
Event Request Limit: 4 00:21:36.065 Number of Firmware Slots: N/A 00:21:36.065 Firmware Slot 1 Read-Only: N/A 00:21:36.065 Firmware Activation Without Reset: N/A 00:21:36.065 Multiple Update Detection Support: N/A 00:21:36.065 Firmware Update Granularity: No Information Provided 00:21:36.065 Per-Namespace SMART Log: No 00:21:36.065 Asymmetric Namespace Access Log Page: Not Supported 00:21:36.065 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:21:36.065 Command Effects Log Page: Not Supported 00:21:36.065 Get Log Page Extended Data: Supported 00:21:36.065 Telemetry Log Pages: Not Supported 00:21:36.065 Persistent Event Log Pages: Not Supported 00:21:36.065 Supported Log Pages Log Page: May Support 00:21:36.065 Commands Supported & Effects Log Page: Not Supported 00:21:36.065 Feature Identifiers & Effects Log Page:May Support 00:21:36.065 NVMe-MI Commands & Effects Log Page: May Support 00:21:36.065 Data Area 4 for Telemetry Log: Not Supported 00:21:36.065 Error Log Page Entries Supported: 128 00:21:36.065 Keep Alive: Not Supported 00:21:36.065 00:21:36.065 NVM Command Set Attributes 00:21:36.065 ========================== 00:21:36.065 Submission Queue Entry Size 00:21:36.065 Max: 1 00:21:36.065 Min: 1 00:21:36.065 Completion Queue Entry Size 00:21:36.065 Max: 1 00:21:36.065 Min: 1 00:21:36.065 Number of Namespaces: 0 00:21:36.065 Compare Command: Not Supported 00:21:36.065 Write Uncorrectable Command: Not Supported 00:21:36.065 Dataset Management Command: Not Supported 00:21:36.065 Write Zeroes Command: Not Supported 00:21:36.065 Set Features Save Field: Not Supported 00:21:36.065 Reservations: Not Supported 00:21:36.065 Timestamp: Not Supported 00:21:36.065 Copy: Not Supported 00:21:36.065 Volatile Write Cache: Not Present 00:21:36.065 Atomic Write Unit (Normal): 1 00:21:36.065 Atomic Write Unit (PFail): 1 00:21:36.065 Atomic Compare & Write Unit: 1 00:21:36.065 Fused Compare & Write: Supported 00:21:36.065 Scatter-Gather List 00:21:36.065 SGL Command Set: Supported 00:21:36.065 SGL Keyed: Supported 00:21:36.065 SGL Bit Bucket Descriptor: Not Supported 00:21:36.065 SGL Metadata Pointer: Not Supported 00:21:36.065 Oversized SGL: Not Supported 00:21:36.065 SGL Metadata Address: Not Supported 00:21:36.065 SGL Offset: Supported 00:21:36.065 Transport SGL Data Block: Not Supported 00:21:36.065 Replay Protected Memory Block: Not Supported 00:21:36.065 00:21:36.065 Firmware Slot Information 00:21:36.065 ========================= 00:21:36.065 Active slot: 0 00:21:36.065 00:21:36.065 00:21:36.065 Error Log 00:21:36.065 ========= 00:21:36.065 00:21:36.065 Active Namespaces 00:21:36.065 ================= 00:21:36.065 Discovery Log Page 00:21:36.065 ================== 00:21:36.065 Generation Counter: 2 00:21:36.065 Number of Records: 2 00:21:36.065 Record Format: 0 00:21:36.065 00:21:36.065 Discovery Log Entry 0 00:21:36.065 ---------------------- 00:21:36.065 Transport Type: 3 (TCP) 00:21:36.065 Address Family: 1 (IPv4) 00:21:36.065 Subsystem Type: 3 (Current Discovery Subsystem) 00:21:36.065 Entry Flags: 00:21:36.065 Duplicate Returned Information: 1 00:21:36.065 Explicit Persistent Connection Support for Discovery: 1 00:21:36.065 Transport Requirements: 00:21:36.065 Secure Channel: Not Required 00:21:36.065 Port ID: 0 (0x0000) 00:21:36.065 Controller ID: 65535 (0xffff) 00:21:36.066 Admin Max SQ Size: 128 00:21:36.066 Transport Service Identifier: 4420 00:21:36.066 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:21:36.066 Transport Address: 10.0.0.2 00:21:36.066 
Discovery Log Entry 1 00:21:36.066 ---------------------- 00:21:36.066 Transport Type: 3 (TCP) 00:21:36.066 Address Family: 1 (IPv4) 00:21:36.066 Subsystem Type: 2 (NVM Subsystem) 00:21:36.066 Entry Flags: 00:21:36.066 Duplicate Returned Information: 0 00:21:36.066 Explicit Persistent Connection Support for Discovery: 0 00:21:36.066 Transport Requirements: 00:21:36.066 Secure Channel: Not Required 00:21:36.066 Port ID: 0 (0x0000) 00:21:36.066 Controller ID: 65535 (0xffff) 00:21:36.066 Admin Max SQ Size: 128 00:21:36.066 Transport Service Identifier: 4420 00:21:36.066 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:21:36.066 Transport Address: 10.0.0.2 [2024-04-24 20:53:00.504834] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:21:36.066 [2024-04-24 20:53:00.504847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.066 [2024-04-24 20:53:00.504854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.066 [2024-04-24 20:53:00.504860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.066 [2024-04-24 20:53:00.504866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.066 [2024-04-24 20:53:00.504874] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.504878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.504882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.504889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.504902] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.504979] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.504985] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.504988] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.504992] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.504999] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505003] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505006] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505028] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505119] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.505125] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505128] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505132] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505138] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:36.066 [2024-04-24 20:53:00.505142] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:36.066 [2024-04-24 20:53:00.505151] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505155] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505175] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505241] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.505247] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505251] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505254] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505265] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505269] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505272] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505289] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505358] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.505364] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505367] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505371] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505381] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505385] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505388] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505404] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505470] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 
20:53:00.505477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505480] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505484] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505494] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505499] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505503] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505519] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505589] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.505595] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505598] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505602] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505612] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505616] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505619] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505635] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505708] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.505714] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505717] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505721] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505740] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505744] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505760] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505830] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.505836] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505839] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:36.066 [2024-04-24 20:53:00.505843] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505853] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505857] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.066 [2024-04-24 20:53:00.505867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.066 [2024-04-24 20:53:00.505876] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.066 [2024-04-24 20:53:00.505940] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.066 [2024-04-24 20:53:00.505946] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.066 [2024-04-24 20:53:00.505949] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505953] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.066 [2024-04-24 20:53:00.505963] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505968] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.066 [2024-04-24 20:53:00.505972] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.505979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.505988] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506054] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506061] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506064] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506067] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506078] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506081] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506085] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506101] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506170] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506176] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506183] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506193] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506197] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506200] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506207] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506217] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506277] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506286] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506290] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506300] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506304] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506307] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506323] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506421] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506427] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506430] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506434] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506444] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506448] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506453] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506469] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506533] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506539] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506542] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506546] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506556] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506560] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 
20:53:00.506563] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506579] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506642] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506649] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506652] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506656] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506666] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506673] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506689] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506760] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506766] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506770] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506773] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506783] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506787] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506791] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506807] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.506901] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.506907] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.506910] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506914] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.506924] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506928] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.506931] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.506940] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.506949] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.507013] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.507019] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.507023] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507026] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.507036] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507040] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507043] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.507050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.507060] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.507123] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.507129] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.507132] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507136] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.507146] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507150] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.507160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.507169] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.507235] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.507241] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.507245] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507248] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.507258] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507262] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507266] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.067 [2024-04-24 20:53:00.507272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.067 [2024-04-24 20:53:00.507282] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.067 [2024-04-24 20:53:00.507350] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.067 [2024-04-24 20:53:00.507357] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.067 [2024-04-24 20:53:00.507360] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507364] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.067 [2024-04-24 20:53:00.507374] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.067 [2024-04-24 20:53:00.507377] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507381] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.507387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.507399] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.507462] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.507469] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.507472] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507475] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.507486] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507489] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507493] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.507499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.507509] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.507578] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.507584] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.507587] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507591] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.507601] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507605] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507608] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.507615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.507624] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.507715] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
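The block of *DEBUG* entries repeated above is the host tearing down the previous discovery-controller connection: each pass sends a Fabrics Property Get for the controller status register (the FABRIC PROPERTY GET qid:0 cid:3 notices) and re-checks CSTS.SHST until the target reports that shutdown processing is complete, which the log confirms a few entries further on with "shutdown complete in 7 milliseconds". The check each pass performs amounts to the sketch below; prop_get() is a hypothetical stand-in for one Fabrics Property Get round trip, not an SPDK function.

/* Illustration of the CSTS.SHST poll implied by the repeated entries above.
 * prop_get() is a hypothetical stand-in for a Fabrics Property Get exchange
 * (the qid:0 cid:3 commands in the log); it is not part of the SPDK API. */
#include <stdbool.h>
#include <stdint.h>

#define NVME_CSTS_OFFSET        0x1c         /* Controller Status property      */
#define NVME_CSTS_SHST_MASK     0x0000000cu  /* CSTS.SHST, bits 3:2             */
#define NVME_CSTS_SHST_COMPLETE 0x00000008u  /* 10b = shutdown processing done  */

extern uint32_t prop_get(uint32_t offset);   /* hypothetical transport hook     */

static bool shutdown_complete(void)
{
    /* One call here corresponds to one repetition of the log block above. */
    uint32_t csts = prop_get(NVME_CSTS_OFFSET);

    return (csts & NVME_CSTS_SHST_MASK) == NVME_CSTS_SHST_COMPLETE;
}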
00:21:36.068 [2024-04-24 20:53:00.507721] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.507727] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507731] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.507741] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507745] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507748] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.507755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.507765] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.507835] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.507842] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.507845] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507848] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.507858] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507862] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507866] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.507872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.507884] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.507950] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.507956] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.507960] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507963] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.507973] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507977] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.507981] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.507987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.507997] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.508066] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.508072] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.508076] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508079] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.508090] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508093] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508097] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.508104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.508113] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.508179] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.508185] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.508188] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.508202] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508206] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508209] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.508216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.508225] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.508295] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.508301] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.508305] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508308] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.508318] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508322] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.508332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.508341] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.508407] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.508413] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.508417] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508420] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on 
tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.508430] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508434] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508437] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.508444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.508454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.508521] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.508527] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.508530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508534] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.508544] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508548] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508551] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.508558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.508567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.508633] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.508639] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.508643] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508646] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.508656] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508660] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.508663] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.068 [2024-04-24 20:53:00.508670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.068 [2024-04-24 20:53:00.508679] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.068 [2024-04-24 20:53:00.512735] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.068 [2024-04-24 20:53:00.512743] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.068 [2024-04-24 20:53:00.512746] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.068 [2024-04-24 20:53:00.512750] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.068 [2024-04-24 20:53:00.512760] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.512764] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.512768] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23c0cb0) 00:21:36.069 [2024-04-24 20:53:00.512774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.069 [2024-04-24 20:53:00.512785] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2428e20, cid 3, qid 0 00:21:36.069 [2024-04-24 20:53:00.512859] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.069 [2024-04-24 20:53:00.512869] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.069 [2024-04-24 20:53:00.512872] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.512876] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2428e20) on tqpair=0x23c0cb0 00:21:36.069 [2024-04-24 20:53:00.512884] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:21:36.069 00:21:36.069 20:53:00 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:36.069 [2024-04-24 20:53:00.551383] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:21:36.069 [2024-04-24 20:53:00.551448] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852940 ] 00:21:36.069 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.069 [2024-04-24 20:53:00.583257] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:36.069 [2024-04-24 20:53:00.583298] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:36.069 [2024-04-24 20:53:00.583303] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:36.069 [2024-04-24 20:53:00.583314] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:36.069 [2024-04-24 20:53:00.583321] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:36.069 [2024-04-24 20:53:00.586757] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:36.069 [2024-04-24 20:53:00.586784] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x57fcb0 0 00:21:36.069 [2024-04-24 20:53:00.594733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:36.069 [2024-04-24 20:53:00.594742] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:36.069 [2024-04-24 20:53:00.594747] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:36.069 [2024-04-24 20:53:00.594750] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:36.069 [2024-04-24 20:53:00.594779] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.594785] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.594789] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.069 [2024-04-24 20:53:00.594800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:36.069 [2024-04-24 20:53:00.594815] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.069 [2024-04-24 20:53:00.601734] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.069 [2024-04-24 20:53:00.601743] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.069 [2024-04-24 20:53:00.601747] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.601751] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.069 [2024-04-24 20:53:00.601760] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:36.069 [2024-04-24 20:53:00.601766] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:36.069 [2024-04-24 20:53:00.601771] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:36.069 [2024-04-24 20:53:00.601786] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.601790] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.601794] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.069 [2024-04-24 20:53:00.601801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.069 [2024-04-24 20:53:00.601814] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.069 [2024-04-24 20:53:00.601987] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.069 [2024-04-24 20:53:00.601993] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.069 [2024-04-24 20:53:00.601996] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602000] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.069 [2024-04-24 20:53:00.602005] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:36.069 [2024-04-24 20:53:00.602012] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:36.069 [2024-04-24 20:53:00.602018] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602022] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602025] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.069 [2024-04-24 20:53:00.602032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.069 [2024-04-24 20:53:00.602042] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.069 [2024-04-24 20:53:00.602235] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.069 [2024-04-24 20:53:00.602241] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.069 [2024-04-24 20:53:00.602244] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602248] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.069 [2024-04-24 20:53:00.602253] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:36.069 [2024-04-24 20:53:00.602261] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:36.069 [2024-04-24 20:53:00.602267] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602271] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602274] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.069 [2024-04-24 20:53:00.602281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.069 [2024-04-24 20:53:00.602291] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.069 [2024-04-24 20:53:00.602471] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.069 [2024-04-24 20:53:00.602477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.069 [2024-04-24 20:53:00.602480] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602484] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.069 [2024-04-24 20:53:00.602489] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:36.069 [2024-04-24 20:53:00.602498] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602502] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.069 [2024-04-24 20:53:00.602505] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.069 [2024-04-24 20:53:00.602514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.069 [2024-04-24 20:53:00.602524] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.069 [2024-04-24 20:53:00.602738] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.069 [2024-04-24 20:53:00.602745] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.070 [2024-04-24 20:53:00.602748] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.602752] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.070 [2024-04-24 20:53:00.602757] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:36.070 [2024-04-24 20:53:00.602761] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:36.070 [2024-04-24 20:53:00.602768] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:36.070 [2024-04-24 20:53:00.602874] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:36.070 [2024-04-24 20:53:00.602877] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:36.070 [2024-04-24 20:53:00.602885] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.602888] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.602892] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.602899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.070 [2024-04-24 20:53:00.602909] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.070 [2024-04-24 20:53:00.603077] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.070 [2024-04-24 20:53:00.603084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.070 [2024-04-24 20:53:00.603087] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603091] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.070 [2024-04-24 20:53:00.603095] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:36.070 [2024-04-24 20:53:00.603104] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603108] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603111] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.603118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.070 [2024-04-24 20:53:00.603128] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.070 [2024-04-24 20:53:00.603330] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.070 [2024-04-24 20:53:00.603336] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.070 [2024-04-24 20:53:00.603339] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603343] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.070 [2024-04-24 20:53:00.603347] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:36.070 [2024-04-24 20:53:00.603352] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.603359] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:36.070 [2024-04-24 20:53:00.603369] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 
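From the spdk_nvme_identify invocation above through this point, the entries trace the standard admin-queue bring-up for a new NVMe/TCP connection: the socket connect and ICReq/ICResp exchange (pdu type = 1 is the ICResp), Property Get of VS and CAP, clearing CC.EN and waiting for CSTS.RDY = 0, then setting CC.EN = 1 and waiting for CSTS.RDY = 1; the CapsuleResp PDUs (pdu type = 5) carry the completion for each of those fabrics commands. SPDK's public host API performs this whole sequence on behalf of the caller; a minimal sketch against the v24.05-pre tree under test (option handling simplified, exact signatures may differ between releases) is:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    opts.name = "identify_sketch";
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same transport ID string the test passes to spdk_nvme_identify -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the bring-up seen in the log: ICReq/ICResp, read VS/CAP,
     * CC.EN 0 -> CSTS.RDY 0, CC.EN 1 -> CSTS.RDY 1, then Identify Controller. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Serial Number: %.20s\n", cdata->sn);
    printf("Model Number:  %.40s\n", cdata->mn);

    spdk_nvme_detach(ctrlr);
    return 0;
}

The entries that continue below are the Identify Controller transfer itself: the IDENTIFY command with cdw10:00000001 (CNS 01h) and a C2HData PDU (pdu type = 7) carrying the 4096-byte identify payload back to the host.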
00:21:36.070 [2024-04-24 20:53:00.603378] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603382] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.603389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.070 [2024-04-24 20:53:00.603399] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.070 [2024-04-24 20:53:00.603620] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.070 [2024-04-24 20:53:00.603626] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.070 [2024-04-24 20:53:00.603630] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603634] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=4096, cccid=0 00:21:36.070 [2024-04-24 20:53:00.603638] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e7a00) on tqpair(0x57fcb0): expected_datao=0, payload_size=4096 00:21:36.070 [2024-04-24 20:53:00.603643] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603654] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603658] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603834] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.070 [2024-04-24 20:53:00.603840] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.070 [2024-04-24 20:53:00.603844] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603847] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.070 [2024-04-24 20:53:00.603855] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:36.070 [2024-04-24 20:53:00.603860] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:36.070 [2024-04-24 20:53:00.603864] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:36.070 [2024-04-24 20:53:00.603868] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:36.070 [2024-04-24 20:53:00.603872] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:36.070 [2024-04-24 20:53:00.603877] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.603885] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.603891] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603895] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.603899] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.603906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: 
SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.070 [2024-04-24 20:53:00.603916] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.070 [2024-04-24 20:53:00.604116] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.070 [2024-04-24 20:53:00.604122] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.070 [2024-04-24 20:53:00.604125] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604129] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7a00) on tqpair=0x57fcb0 00:21:36.070 [2024-04-24 20:53:00.604138] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604141] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604145] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.604151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.070 [2024-04-24 20:53:00.604157] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604161] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604164] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.604170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.070 [2024-04-24 20:53:00.604176] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604180] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604183] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.604189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.070 [2024-04-24 20:53:00.604195] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604198] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604202] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.604207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.070 [2024-04-24 20:53:00.604212] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.604222] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.604229] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604232] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x57fcb0) 00:21:36.070 [2024-04-24 20:53:00.604239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.070 [2024-04-24 20:53:00.604250] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7a00, cid 0, qid 0 00:21:36.070 [2024-04-24 20:53:00.604256] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7b60, cid 1, qid 0 00:21:36.070 [2024-04-24 20:53:00.604260] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7cc0, cid 2, qid 0 00:21:36.070 [2024-04-24 20:53:00.604265] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.070 [2024-04-24 20:53:00.604270] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7f80, cid 4, qid 0 00:21:36.070 [2024-04-24 20:53:00.604540] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.070 [2024-04-24 20:53:00.604546] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.070 [2024-04-24 20:53:00.604549] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.070 [2024-04-24 20:53:00.604553] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7f80) on tqpair=0x57fcb0 00:21:36.070 [2024-04-24 20:53:00.604558] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:36.070 [2024-04-24 20:53:00.604562] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.604572] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.604580] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:36.070 [2024-04-24 20:53:00.604586] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.604590] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.604593] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x57fcb0) 00:21:36.071 [2024-04-24 20:53:00.604600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:36.071 [2024-04-24 20:53:00.604609] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7f80, cid 4, qid 0 00:21:36.071 [2024-04-24 20:53:00.604792] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.071 [2024-04-24 20:53:00.604799] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.071 [2024-04-24 20:53:00.604802] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.604806] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7f80) on tqpair=0x57fcb0 00:21:36.071 [2024-04-24 20:53:00.604855] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:36.071 [2024-04-24 20:53:00.604864] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:36.071 [2024-04-24 20:53:00.604871] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.604875] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x57fcb0) 00:21:36.071 [2024-04-24 20:53:00.604881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.071 [2024-04-24 20:53:00.604891] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7f80, cid 4, qid 0 00:21:36.071 [2024-04-24 20:53:00.605103] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.071 [2024-04-24 20:53:00.605110] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.071 [2024-04-24 20:53:00.605113] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.605117] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=4096, cccid=4 00:21:36.071 [2024-04-24 20:53:00.605121] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e7f80) on tqpair(0x57fcb0): expected_datao=0, payload_size=4096 00:21:36.071 [2024-04-24 20:53:00.605125] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.605141] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.605145] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.646916] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.071 [2024-04-24 20:53:00.646926] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.071 [2024-04-24 20:53:00.646929] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.646933] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7f80) on tqpair=0x57fcb0 00:21:36.071 [2024-04-24 20:53:00.646944] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:36.071 [2024-04-24 20:53:00.646958] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:36.071 [2024-04-24 20:53:00.646967] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:36.071 [2024-04-24 20:53:00.646974] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.646978] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x57fcb0) 00:21:36.071 [2024-04-24 20:53:00.646985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.071 [2024-04-24 20:53:00.646999] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7f80, cid 4, qid 0 00:21:36.071 [2024-04-24 20:53:00.647240] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.071 [2024-04-24 20:53:00.647247] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.071 [2024-04-24 20:53:00.647250] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.647254] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=4096, cccid=4 00:21:36.071 [2024-04-24 20:53:00.647258] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e7f80) on tqpair(0x57fcb0): expected_datao=0, 
payload_size=4096 00:21:36.071 [2024-04-24 20:53:00.647262] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.647277] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.647281] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.688929] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.071 [2024-04-24 20:53:00.688937] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.071 [2024-04-24 20:53:00.688941] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.688945] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7f80) on tqpair=0x57fcb0 00:21:36.071 [2024-04-24 20:53:00.688959] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:36.071 [2024-04-24 20:53:00.688968] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:36.071 [2024-04-24 20:53:00.688976] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.688979] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x57fcb0) 00:21:36.071 [2024-04-24 20:53:00.688986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.071 [2024-04-24 20:53:00.688997] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7f80, cid 4, qid 0 00:21:36.071 [2024-04-24 20:53:00.689212] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.071 [2024-04-24 20:53:00.689218] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.071 [2024-04-24 20:53:00.689222] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.689225] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=4096, cccid=4 00:21:36.071 [2024-04-24 20:53:00.689229] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e7f80) on tqpair(0x57fcb0): expected_datao=0, payload_size=4096 00:21:36.071 [2024-04-24 20:53:00.689234] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.689249] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.071 [2024-04-24 20:53:00.689253] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.733734] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.334 [2024-04-24 20:53:00.733743] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.334 [2024-04-24 20:53:00.733747] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.733750] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7f80) on tqpair=0x57fcb0 00:21:36.334 [2024-04-24 20:53:00.733758] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:36.334 [2024-04-24 20:53:00.733766] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 
ms) 00:21:36.334 [2024-04-24 20:53:00.733777] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:36.334 [2024-04-24 20:53:00.733785] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:36.334 [2024-04-24 20:53:00.733791] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:36.334 [2024-04-24 20:53:00.733795] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:36.334 [2024-04-24 20:53:00.733800] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:36.334 [2024-04-24 20:53:00.733805] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:36.334 [2024-04-24 20:53:00.733819] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.733823] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x57fcb0) 00:21:36.334 [2024-04-24 20:53:00.733829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.334 [2024-04-24 20:53:00.733836] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.733839] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.733843] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x57fcb0) 00:21:36.334 [2024-04-24 20:53:00.733849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:36.334 [2024-04-24 20:53:00.733862] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7f80, cid 4, qid 0 00:21:36.334 [2024-04-24 20:53:00.733868] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e80e0, cid 5, qid 0 00:21:36.334 [2024-04-24 20:53:00.733962] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.334 [2024-04-24 20:53:00.733968] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.334 [2024-04-24 20:53:00.733971] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.733975] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7f80) on tqpair=0x57fcb0 00:21:36.334 [2024-04-24 20:53:00.733982] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.334 [2024-04-24 20:53:00.733987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.334 [2024-04-24 20:53:00.733991] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.733994] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e80e0) on tqpair=0x57fcb0 00:21:36.334 [2024-04-24 20:53:00.734003] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.734007] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x57fcb0) 00:21:36.334 [2024-04-24 20:53:00.734013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER 
MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.334 [2024-04-24 20:53:00.734022] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e80e0, cid 5, qid 0 00:21:36.334 [2024-04-24 20:53:00.734184] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.334 [2024-04-24 20:53:00.734190] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.334 [2024-04-24 20:53:00.734194] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.734197] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e80e0) on tqpair=0x57fcb0 00:21:36.334 [2024-04-24 20:53:00.734206] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.734210] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x57fcb0) 00:21:36.334 [2024-04-24 20:53:00.734216] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.334 [2024-04-24 20:53:00.734227] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e80e0, cid 5, qid 0 00:21:36.334 [2024-04-24 20:53:00.734409] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.334 [2024-04-24 20:53:00.734415] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.334 [2024-04-24 20:53:00.734418] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.734422] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e80e0) on tqpair=0x57fcb0 00:21:36.334 [2024-04-24 20:53:00.734431] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.734434] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x57fcb0) 00:21:36.334 [2024-04-24 20:53:00.734441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.334 [2024-04-24 20:53:00.734450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e80e0, cid 5, qid 0 00:21:36.334 [2024-04-24 20:53:00.734634] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.334 [2024-04-24 20:53:00.734640] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.334 [2024-04-24 20:53:00.734644] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.734647] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e80e0) on tqpair=0x57fcb0 00:21:36.334 [2024-04-24 20:53:00.734658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.334 [2024-04-24 20:53:00.734662] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x57fcb0) 00:21:36.334 [2024-04-24 20:53:00.734668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.334 [2024-04-24 20:53:00.734676] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.734679] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x57fcb0) 00:21:36.335 [2024-04-24 20:53:00.734685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.335 [2024-04-24 20:53:00.734693] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.734696] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x57fcb0) 00:21:36.335 [2024-04-24 20:53:00.734702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.335 [2024-04-24 20:53:00.734709] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.734713] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x57fcb0) 00:21:36.335 [2024-04-24 20:53:00.734719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.335 [2024-04-24 20:53:00.734736] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e80e0, cid 5, qid 0 00:21:36.335 [2024-04-24 20:53:00.734741] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7f80, cid 4, qid 0 00:21:36.335 [2024-04-24 20:53:00.734746] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e8240, cid 6, qid 0 00:21:36.335 [2024-04-24 20:53:00.734751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e83a0, cid 7, qid 0 00:21:36.335 [2024-04-24 20:53:00.734990] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.335 [2024-04-24 20:53:00.734996] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.335 [2024-04-24 20:53:00.735000] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735003] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=8192, cccid=5 00:21:36.335 [2024-04-24 20:53:00.735010] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e80e0) on tqpair(0x57fcb0): expected_datao=0, payload_size=8192 00:21:36.335 [2024-04-24 20:53:00.735014] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735106] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735110] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735116] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.335 [2024-04-24 20:53:00.735122] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.335 [2024-04-24 20:53:00.735125] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735128] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=512, cccid=4 00:21:36.335 [2024-04-24 20:53:00.735133] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e7f80) on tqpair(0x57fcb0): expected_datao=0, payload_size=512 00:21:36.335 [2024-04-24 20:53:00.735137] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735143] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735146] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735152] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:21:36.335 [2024-04-24 20:53:00.735158] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.335 [2024-04-24 20:53:00.735161] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735164] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=512, cccid=6 00:21:36.335 [2024-04-24 20:53:00.735168] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e8240) on tqpair(0x57fcb0): expected_datao=0, payload_size=512 00:21:36.335 [2024-04-24 20:53:00.735173] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735179] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735182] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735188] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:36.335 [2024-04-24 20:53:00.735193] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:36.335 [2024-04-24 20:53:00.735197] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735200] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x57fcb0): datao=0, datal=4096, cccid=7 00:21:36.335 [2024-04-24 20:53:00.735204] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e83a0) on tqpair(0x57fcb0): expected_datao=0, payload_size=4096 00:21:36.335 [2024-04-24 20:53:00.735208] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735215] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735218] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735230] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.335 [2024-04-24 20:53:00.735236] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.335 [2024-04-24 20:53:00.735240] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735243] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e80e0) on tqpair=0x57fcb0 00:21:36.335 [2024-04-24 20:53:00.735256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.335 [2024-04-24 20:53:00.735262] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.335 [2024-04-24 20:53:00.735265] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735269] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7f80) on tqpair=0x57fcb0 00:21:36.335 [2024-04-24 20:53:00.735277] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.335 [2024-04-24 20:53:00.735283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.335 [2024-04-24 20:53:00.735287] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.335 [2024-04-24 20:53:00.735292] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e8240) on tqpair=0x57fcb0 00:21:36.335 [2024-04-24 20:53:00.735299] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.335 [2024-04-24 20:53:00.735305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.335 [2024-04-24 20:53:00.735308] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
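After the Identify phases, spdk_nvme_identify queues its remaining admin commands back-to-back, and the entries above show them going out and completing: Get Features reads for arbitration, power management, temperature threshold and number of queues, a Keep Alive, and four Get Log Page reads (cid 4 through 7) for Error Information (LID 01h), SMART / Health Information (02h), Firmware Slot (03h) and Commands Supported and Effects (05h). One such read, issued through the public API and polled to completion, could look roughly like the sketch below (assumed v24.05-era signatures; the pdu/capsule DEBUG lines above are emitted from inside the completion poll).

#include "spdk/nvme.h"

static volatile bool g_log_page_done;

/* Completion callback for the asynchronous Get Log Page request. */
static void log_page_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
    (void)cb_arg;
    if (spdk_nvme_cpl_is_error(cpl)) {
        fprintf(stderr, "GET LOG PAGE failed\n");
    }
    g_log_page_done = true;
}

/* Read the SMART / Health Information page (LID 02h), one of the four
 * GET LOG PAGE commands visible above, for the global namespace tag. */
static int read_health_log(struct spdk_nvme_ctrlr *ctrlr,
                           struct spdk_nvme_health_information_page *page)
{
    int rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                                              SPDK_NVME_LOG_HEALTH_INFORMATION,
                                              SPDK_NVME_GLOBAL_NS_TAG,
                                              page, sizeof(*page), 0,
                                              log_page_done, NULL);
    if (rc != 0) {
        return rc;
    }
    while (!g_log_page_done) {
        /* Polling here is what produces the pdu type = 5 / 7 handling above. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
    return 0;
}

The controller report printed next is rendered from the Identify Controller data and these feature/log-page reads.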
00:21:36.335 [2024-04-24 20:53:00.735311] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e83a0) on tqpair=0x57fcb0 00:21:36.335 ===================================================== 00:21:36.335 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.335 ===================================================== 00:21:36.335 Controller Capabilities/Features 00:21:36.335 ================================ 00:21:36.335 Vendor ID: 8086 00:21:36.335 Subsystem Vendor ID: 8086 00:21:36.335 Serial Number: SPDK00000000000001 00:21:36.335 Model Number: SPDK bdev Controller 00:21:36.335 Firmware Version: 24.05 00:21:36.335 Recommended Arb Burst: 6 00:21:36.335 IEEE OUI Identifier: e4 d2 5c 00:21:36.335 Multi-path I/O 00:21:36.335 May have multiple subsystem ports: Yes 00:21:36.335 May have multiple controllers: Yes 00:21:36.335 Associated with SR-IOV VF: No 00:21:36.335 Max Data Transfer Size: 131072 00:21:36.335 Max Number of Namespaces: 32 00:21:36.335 Max Number of I/O Queues: 127 00:21:36.335 NVMe Specification Version (VS): 1.3 00:21:36.335 NVMe Specification Version (Identify): 1.3 00:21:36.335 Maximum Queue Entries: 128 00:21:36.335 Contiguous Queues Required: Yes 00:21:36.335 Arbitration Mechanisms Supported 00:21:36.335 Weighted Round Robin: Not Supported 00:21:36.335 Vendor Specific: Not Supported 00:21:36.335 Reset Timeout: 15000 ms 00:21:36.335 Doorbell Stride: 4 bytes 00:21:36.335 NVM Subsystem Reset: Not Supported 00:21:36.335 Command Sets Supported 00:21:36.335 NVM Command Set: Supported 00:21:36.335 Boot Partition: Not Supported 00:21:36.335 Memory Page Size Minimum: 4096 bytes 00:21:36.335 Memory Page Size Maximum: 4096 bytes 00:21:36.335 Persistent Memory Region: Not Supported 00:21:36.335 Optional Asynchronous Events Supported 00:21:36.335 Namespace Attribute Notices: Supported 00:21:36.335 Firmware Activation Notices: Not Supported 00:21:36.335 ANA Change Notices: Not Supported 00:21:36.335 PLE Aggregate Log Change Notices: Not Supported 00:21:36.335 LBA Status Info Alert Notices: Not Supported 00:21:36.335 EGE Aggregate Log Change Notices: Not Supported 00:21:36.335 Normal NVM Subsystem Shutdown event: Not Supported 00:21:36.335 Zone Descriptor Change Notices: Not Supported 00:21:36.335 Discovery Log Change Notices: Not Supported 00:21:36.335 Controller Attributes 00:21:36.335 128-bit Host Identifier: Supported 00:21:36.335 Non-Operational Permissive Mode: Not Supported 00:21:36.335 NVM Sets: Not Supported 00:21:36.335 Read Recovery Levels: Not Supported 00:21:36.335 Endurance Groups: Not Supported 00:21:36.335 Predictable Latency Mode: Not Supported 00:21:36.335 Traffic Based Keep ALive: Not Supported 00:21:36.335 Namespace Granularity: Not Supported 00:21:36.335 SQ Associations: Not Supported 00:21:36.335 UUID List: Not Supported 00:21:36.335 Multi-Domain Subsystem: Not Supported 00:21:36.335 Fixed Capacity Management: Not Supported 00:21:36.335 Variable Capacity Management: Not Supported 00:21:36.335 Delete Endurance Group: Not Supported 00:21:36.335 Delete NVM Set: Not Supported 00:21:36.335 Extended LBA Formats Supported: Not Supported 00:21:36.335 Flexible Data Placement Supported: Not Supported 00:21:36.335 00:21:36.335 Controller Memory Buffer Support 00:21:36.335 ================================ 00:21:36.335 Supported: No 00:21:36.335 00:21:36.335 Persistent Memory Region Support 00:21:36.335 ================================ 00:21:36.335 Supported: No 00:21:36.335 00:21:36.335 Admin Command Set Attributes 00:21:36.335 
============================ 00:21:36.335 Security Send/Receive: Not Supported 00:21:36.335 Format NVM: Not Supported 00:21:36.335 Firmware Activate/Download: Not Supported 00:21:36.335 Namespace Management: Not Supported 00:21:36.335 Device Self-Test: Not Supported 00:21:36.335 Directives: Not Supported 00:21:36.335 NVMe-MI: Not Supported 00:21:36.336 Virtualization Management: Not Supported 00:21:36.336 Doorbell Buffer Config: Not Supported 00:21:36.336 Get LBA Status Capability: Not Supported 00:21:36.336 Command & Feature Lockdown Capability: Not Supported 00:21:36.336 Abort Command Limit: 4 00:21:36.336 Async Event Request Limit: 4 00:21:36.336 Number of Firmware Slots: N/A 00:21:36.336 Firmware Slot 1 Read-Only: N/A 00:21:36.336 Firmware Activation Without Reset: N/A 00:21:36.336 Multiple Update Detection Support: N/A 00:21:36.336 Firmware Update Granularity: No Information Provided 00:21:36.336 Per-Namespace SMART Log: No 00:21:36.336 Asymmetric Namespace Access Log Page: Not Supported 00:21:36.336 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:36.336 Command Effects Log Page: Supported 00:21:36.336 Get Log Page Extended Data: Supported 00:21:36.336 Telemetry Log Pages: Not Supported 00:21:36.336 Persistent Event Log Pages: Not Supported 00:21:36.336 Supported Log Pages Log Page: May Support 00:21:36.336 Commands Supported & Effects Log Page: Not Supported 00:21:36.336 Feature Identifiers & Effects Log Page:May Support 00:21:36.336 NVMe-MI Commands & Effects Log Page: May Support 00:21:36.336 Data Area 4 for Telemetry Log: Not Supported 00:21:36.336 Error Log Page Entries Supported: 128 00:21:36.336 Keep Alive: Supported 00:21:36.336 Keep Alive Granularity: 10000 ms 00:21:36.336 00:21:36.336 NVM Command Set Attributes 00:21:36.336 ========================== 00:21:36.336 Submission Queue Entry Size 00:21:36.336 Max: 64 00:21:36.336 Min: 64 00:21:36.336 Completion Queue Entry Size 00:21:36.336 Max: 16 00:21:36.336 Min: 16 00:21:36.336 Number of Namespaces: 32 00:21:36.336 Compare Command: Supported 00:21:36.336 Write Uncorrectable Command: Not Supported 00:21:36.336 Dataset Management Command: Supported 00:21:36.336 Write Zeroes Command: Supported 00:21:36.336 Set Features Save Field: Not Supported 00:21:36.336 Reservations: Supported 00:21:36.336 Timestamp: Not Supported 00:21:36.336 Copy: Supported 00:21:36.336 Volatile Write Cache: Present 00:21:36.336 Atomic Write Unit (Normal): 1 00:21:36.336 Atomic Write Unit (PFail): 1 00:21:36.336 Atomic Compare & Write Unit: 1 00:21:36.336 Fused Compare & Write: Supported 00:21:36.336 Scatter-Gather List 00:21:36.336 SGL Command Set: Supported 00:21:36.336 SGL Keyed: Supported 00:21:36.336 SGL Bit Bucket Descriptor: Not Supported 00:21:36.336 SGL Metadata Pointer: Not Supported 00:21:36.336 Oversized SGL: Not Supported 00:21:36.336 SGL Metadata Address: Not Supported 00:21:36.336 SGL Offset: Supported 00:21:36.336 Transport SGL Data Block: Not Supported 00:21:36.336 Replay Protected Memory Block: Not Supported 00:21:36.336 00:21:36.336 Firmware Slot Information 00:21:36.336 ========================= 00:21:36.336 Active slot: 1 00:21:36.336 Slot 1 Firmware Revision: 24.05 00:21:36.336 00:21:36.336 00:21:36.336 Commands Supported and Effects 00:21:36.336 ============================== 00:21:36.336 Admin Commands 00:21:36.336 -------------- 00:21:36.336 Get Log Page (02h): Supported 00:21:36.336 Identify (06h): Supported 00:21:36.336 Abort (08h): Supported 00:21:36.336 Set Features (09h): Supported 00:21:36.336 Get Features (0Ah): Supported 
00:21:36.336 Asynchronous Event Request (0Ch): Supported 00:21:36.336 Keep Alive (18h): Supported 00:21:36.336 I/O Commands 00:21:36.336 ------------ 00:21:36.336 Flush (00h): Supported LBA-Change 00:21:36.336 Write (01h): Supported LBA-Change 00:21:36.336 Read (02h): Supported 00:21:36.336 Compare (05h): Supported 00:21:36.336 Write Zeroes (08h): Supported LBA-Change 00:21:36.336 Dataset Management (09h): Supported LBA-Change 00:21:36.336 Copy (19h): Supported LBA-Change 00:21:36.336 Unknown (79h): Supported LBA-Change 00:21:36.336 Unknown (7Ah): Supported 00:21:36.336 00:21:36.336 Error Log 00:21:36.336 ========= 00:21:36.336 00:21:36.336 Arbitration 00:21:36.336 =========== 00:21:36.336 Arbitration Burst: 1 00:21:36.336 00:21:36.336 Power Management 00:21:36.336 ================ 00:21:36.336 Number of Power States: 1 00:21:36.336 Current Power State: Power State #0 00:21:36.336 Power State #0: 00:21:36.336 Max Power: 0.00 W 00:21:36.336 Non-Operational State: Operational 00:21:36.336 Entry Latency: Not Reported 00:21:36.336 Exit Latency: Not Reported 00:21:36.336 Relative Read Throughput: 0 00:21:36.336 Relative Read Latency: 0 00:21:36.336 Relative Write Throughput: 0 00:21:36.336 Relative Write Latency: 0 00:21:36.336 Idle Power: Not Reported 00:21:36.336 Active Power: Not Reported 00:21:36.336 Non-Operational Permissive Mode: Not Supported 00:21:36.336 00:21:36.336 Health Information 00:21:36.336 ================== 00:21:36.336 Critical Warnings: 00:21:36.336 Available Spare Space: OK 00:21:36.336 Temperature: OK 00:21:36.336 Device Reliability: OK 00:21:36.336 Read Only: No 00:21:36.336 Volatile Memory Backup: OK 00:21:36.336 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:36.336 Temperature Threshold: [2024-04-24 20:53:00.735412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.735417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x57fcb0) 00:21:36.336 [2024-04-24 20:53:00.735423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.336 [2024-04-24 20:53:00.735434] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e83a0, cid 7, qid 0 00:21:36.336 [2024-04-24 20:53:00.735627] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.336 [2024-04-24 20:53:00.735633] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.336 [2024-04-24 20:53:00.735637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.735640] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e83a0) on tqpair=0x57fcb0 00:21:36.336 [2024-04-24 20:53:00.735666] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:36.336 [2024-04-24 20:53:00.735677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.336 [2024-04-24 20:53:00.735683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.336 [2024-04-24 20:53:00.735689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.336 [2024-04-24 20:53:00.735695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:36.336 [2024-04-24 20:53:00.735703] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.735706] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.735710] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.336 [2024-04-24 20:53:00.735717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.336 [2024-04-24 20:53:00.735732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.336 [2024-04-24 20:53:00.735889] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.336 [2024-04-24 20:53:00.735895] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.336 [2024-04-24 20:53:00.735899] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.735902] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.336 [2024-04-24 20:53:00.735909] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.735913] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.735916] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.336 [2024-04-24 20:53:00.735923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.336 [2024-04-24 20:53:00.735935] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.336 [2024-04-24 20:53:00.736126] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.336 [2024-04-24 20:53:00.736132] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.336 [2024-04-24 20:53:00.736135] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.736141] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.336 [2024-04-24 20:53:00.736146] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:36.336 [2024-04-24 20:53:00.736150] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:36.336 [2024-04-24 20:53:00.736159] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.736163] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.736166] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.336 [2024-04-24 20:53:00.736173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.336 [2024-04-24 20:53:00.736183] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.336 [2024-04-24 20:53:00.736385] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.336 [2024-04-24 20:53:00.736391] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.336 [2024-04-24 20:53:00.736395] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:36.336 [2024-04-24 20:53:00.736398] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.336 [2024-04-24 20:53:00.736408] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.336 [2024-04-24 20:53:00.736412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.736415] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.337 [2024-04-24 20:53:00.736422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.337 [2024-04-24 20:53:00.736431] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.337 [2024-04-24 20:53:00.736603] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.337 [2024-04-24 20:53:00.736609] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.337 [2024-04-24 20:53:00.736612] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.736616] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.337 [2024-04-24 20:53:00.736626] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.736629] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.736633] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.337 [2024-04-24 20:53:00.736639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.337 [2024-04-24 20:53:00.736649] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.337 [2024-04-24 20:53:00.736825] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.337 [2024-04-24 20:53:00.736831] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.337 [2024-04-24 20:53:00.736835] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.736838] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.337 [2024-04-24 20:53:00.736848] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.736852] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.736855] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.337 [2024-04-24 20:53:00.736861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.337 [2024-04-24 20:53:00.736871] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.337 [2024-04-24 20:53:00.737085] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.337 [2024-04-24 20:53:00.737093] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.337 [2024-04-24 20:53:00.737097] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737100] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.337 [2024-04-24 20:53:00.737110] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737113] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737117] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.337 [2024-04-24 20:53:00.737123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.337 [2024-04-24 20:53:00.737133] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.337 [2024-04-24 20:53:00.737308] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.337 [2024-04-24 20:53:00.737315] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.337 [2024-04-24 20:53:00.737318] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737322] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.337 [2024-04-24 20:53:00.737331] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737335] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737338] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.337 [2024-04-24 20:53:00.737345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.337 [2024-04-24 20:53:00.737354] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.337 [2024-04-24 20:53:00.737556] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.337 [2024-04-24 20:53:00.737562] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.337 [2024-04-24 20:53:00.737566] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737569] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.337 [2024-04-24 20:53:00.737579] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737582] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.737586] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x57fcb0) 00:21:36.337 [2024-04-24 20:53:00.737592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.337 [2024-04-24 20:53:00.737602] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e7e20, cid 3, qid 0 00:21:36.337 [2024-04-24 20:53:00.741733] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:36.337 [2024-04-24 20:53:00.741741] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:36.337 [2024-04-24 20:53:00.741744] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:36.337 [2024-04-24 20:53:00.741748] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x5e7e20) on tqpair=0x57fcb0 00:21:36.337 [2024-04-24 20:53:00.741756] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:21:36.337 0 Kelvin (-273 Celsius) 00:21:36.337 Available Spare: 0% 00:21:36.337 Available 
Spare Threshold: 0% 00:21:36.337 Life Percentage Used: 0% 00:21:36.337 Data Units Read: 0 00:21:36.337 Data Units Written: 0 00:21:36.337 Host Read Commands: 0 00:21:36.337 Host Write Commands: 0 00:21:36.337 Controller Busy Time: 0 minutes 00:21:36.337 Power Cycles: 0 00:21:36.337 Power On Hours: 0 hours 00:21:36.337 Unsafe Shutdowns: 0 00:21:36.337 Unrecoverable Media Errors: 0 00:21:36.337 Lifetime Error Log Entries: 0 00:21:36.337 Warning Temperature Time: 0 minutes 00:21:36.337 Critical Temperature Time: 0 minutes 00:21:36.337 00:21:36.337 Number of Queues 00:21:36.337 ================ 00:21:36.337 Number of I/O Submission Queues: 127 00:21:36.337 Number of I/O Completion Queues: 127 00:21:36.337 00:21:36.337 Active Namespaces 00:21:36.337 ================= 00:21:36.337 Namespace ID:1 00:21:36.337 Error Recovery Timeout: Unlimited 00:21:36.337 Command Set Identifier: NVM (00h) 00:21:36.337 Deallocate: Supported 00:21:36.337 Deallocated/Unwritten Error: Not Supported 00:21:36.337 Deallocated Read Value: Unknown 00:21:36.337 Deallocate in Write Zeroes: Not Supported 00:21:36.337 Deallocated Guard Field: 0xFFFF 00:21:36.337 Flush: Supported 00:21:36.337 Reservation: Supported 00:21:36.337 Namespace Sharing Capabilities: Multiple Controllers 00:21:36.337 Size (in LBAs): 131072 (0GiB) 00:21:36.337 Capacity (in LBAs): 131072 (0GiB) 00:21:36.337 Utilization (in LBAs): 131072 (0GiB) 00:21:36.337 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:36.337 EUI64: ABCDEF0123456789 00:21:36.337 UUID: 2aa8971a-1889-454d-9ab6-18a1adaac6c8 00:21:36.337 Thin Provisioning: Not Supported 00:21:36.337 Per-NS Atomic Units: Yes 00:21:36.337 Atomic Boundary Size (Normal): 0 00:21:36.337 Atomic Boundary Size (PFail): 0 00:21:36.337 Atomic Boundary Offset: 0 00:21:36.337 Maximum Single Source Range Length: 65535 00:21:36.337 Maximum Copy Length: 65535 00:21:36.337 Maximum Source Range Count: 1 00:21:36.337 NGUID/EUI64 Never Reused: No 00:21:36.337 Namespace Write Protected: No 00:21:36.337 Number of LBA Formats: 1 00:21:36.337 Current LBA Format: LBA Format #00 00:21:36.337 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:36.337 00:21:36.337 20:53:00 -- host/identify.sh@51 -- # sync 00:21:36.337 20:53:00 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.337 20:53:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:36.337 20:53:00 -- common/autotest_common.sh@10 -- # set +x 00:21:36.337 20:53:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:36.337 20:53:00 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:36.337 20:53:00 -- host/identify.sh@56 -- # nvmftestfini 00:21:36.337 20:53:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:36.337 20:53:00 -- nvmf/common.sh@117 -- # sync 00:21:36.337 20:53:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.337 20:53:00 -- nvmf/common.sh@120 -- # set +e 00:21:36.337 20:53:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.337 20:53:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.337 rmmod nvme_tcp 00:21:36.337 rmmod nvme_fabrics 00:21:36.337 rmmod nvme_keyring 00:21:36.337 20:53:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.337 20:53:00 -- nvmf/common.sh@124 -- # set -e 00:21:36.337 20:53:00 -- nvmf/common.sh@125 -- # return 0 00:21:36.337 20:53:00 -- nvmf/common.sh@478 -- # '[' -n 2852726 ']' 00:21:36.337 20:53:00 -- nvmf/common.sh@479 -- # killprocess 2852726 00:21:36.337 20:53:00 -- common/autotest_common.sh@936 -- # '[' -z 2852726 ']' 00:21:36.337 
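The block above is the identify-phase dump of the nqn.2016-06.io.spdk:cnode1 controller and its single namespace as seen over the 10.0.0.2:4420 listener. As a rough sketch only: assuming the identify example is built alongside the perf binary used later in this log (the build/bin/spdk_nvme_identify path and its -r flag are assumptions, not taken from this trace), an equivalent standalone query against the same listener would look like:

  # Hypothetical reproduction of the dump above; binary name and flag are assumed,
  # the transport ID values are the ones this run uses.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'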
20:53:00 -- common/autotest_common.sh@940 -- # kill -0 2852726 00:21:36.337 20:53:00 -- common/autotest_common.sh@941 -- # uname 00:21:36.337 20:53:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:36.337 20:53:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2852726 00:21:36.337 20:53:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:36.337 20:53:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:36.337 20:53:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2852726' 00:21:36.337 killing process with pid 2852726 00:21:36.337 20:53:00 -- common/autotest_common.sh@955 -- # kill 2852726 00:21:36.337 [2024-04-24 20:53:00.890535] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:36.337 20:53:00 -- common/autotest_common.sh@960 -- # wait 2852726 00:21:36.599 20:53:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:36.599 20:53:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:36.599 20:53:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:36.599 20:53:01 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.599 20:53:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.599 20:53:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.599 20:53:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.599 20:53:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.512 20:53:03 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:38.512 00:21:38.512 real 0m11.344s 00:21:38.512 user 0m8.460s 00:21:38.512 sys 0m5.893s 00:21:38.512 20:53:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:38.512 20:53:03 -- common/autotest_common.sh@10 -- # set +x 00:21:38.512 ************************************ 00:21:38.512 END TEST nvmf_identify 00:21:38.512 ************************************ 00:21:38.773 20:53:03 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:38.773 20:53:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:38.773 20:53:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:38.773 20:53:03 -- common/autotest_common.sh@10 -- # set +x 00:21:38.773 ************************************ 00:21:38.773 START TEST nvmf_perf 00:21:38.773 ************************************ 00:21:38.773 20:53:03 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:38.773 * Looking for test storage... 
00:21:38.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.773 20:53:03 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.033 20:53:03 -- nvmf/common.sh@7 -- # uname -s 00:21:39.033 20:53:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.033 20:53:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.033 20:53:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.033 20:53:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.033 20:53:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.033 20:53:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.033 20:53:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.033 20:53:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.033 20:53:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.033 20:53:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.033 20:53:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:39.033 20:53:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:21:39.033 20:53:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.033 20:53:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.033 20:53:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.033 20:53:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.033 20:53:03 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.033 20:53:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.033 20:53:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.033 20:53:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.034 20:53:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.034 20:53:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.034 20:53:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.034 20:53:03 -- paths/export.sh@5 -- # export PATH 00:21:39.034 20:53:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.034 20:53:03 -- nvmf/common.sh@47 -- # : 0 00:21:39.034 20:53:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.034 20:53:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.034 20:53:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.034 20:53:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.034 20:53:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.034 20:53:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.034 20:53:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.034 20:53:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.034 20:53:03 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:39.034 20:53:03 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:39.034 20:53:03 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:39.034 20:53:03 -- host/perf.sh@17 -- # nvmftestinit 00:21:39.034 20:53:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:39.034 20:53:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.034 20:53:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:39.034 20:53:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:39.034 20:53:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:39.034 20:53:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.034 20:53:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:39.034 20:53:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.034 20:53:03 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:21:39.034 20:53:03 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:39.034 20:53:03 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.034 20:53:03 -- common/autotest_common.sh@10 -- # set +x 00:21:45.613 20:53:10 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:45.614 20:53:10 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:45.614 20:53:10 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:45.614 20:53:10 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:45.614 20:53:10 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:45.614 20:53:10 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:45.614 20:53:10 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:45.614 20:53:10 -- nvmf/common.sh@295 -- # net_devs=() 
00:21:45.614 20:53:10 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:45.614 20:53:10 -- nvmf/common.sh@296 -- # e810=() 00:21:45.614 20:53:10 -- nvmf/common.sh@296 -- # local -ga e810 00:21:45.614 20:53:10 -- nvmf/common.sh@297 -- # x722=() 00:21:45.614 20:53:10 -- nvmf/common.sh@297 -- # local -ga x722 00:21:45.614 20:53:10 -- nvmf/common.sh@298 -- # mlx=() 00:21:45.614 20:53:10 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:45.614 20:53:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.614 20:53:10 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:45.614 20:53:10 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:45.614 20:53:10 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:45.614 20:53:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.614 20:53:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:45.614 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:45.614 20:53:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:45.614 20:53:10 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:45.614 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:45.614 20:53:10 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:45.614 20:53:10 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.614 20:53:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.614 20:53:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.614 20:53:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
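The scan above resolves the supported NIC device IDs (e810: 0x1592/0x159b, x722: 0x37d2, plus the Mellanox list) against the PCI bus and then maps each matching function to its kernel net device through sysfs; the resulting cvl_0_0/cvl_0_1 names are echoed just below. A minimal sketch of the same lookup, using the IDs and sysfs path from this trace (the lspci cross-check is illustrative and not part of the harness):

  # List E810 functions by vendor:device, then show the net device behind one of them
  lspci -n -d 8086:159b
  ls /sys/bus/pci/devices/0000:4b:00.0/net/    # expected: cvl_0_0, per the echo below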
00:21:45.614 20:53:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:45.614 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:45.614 20:53:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.614 20:53:10 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:45.614 20:53:10 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.614 20:53:10 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:45.614 20:53:10 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.614 20:53:10 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:45.614 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:45.614 20:53:10 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.614 20:53:10 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:45.614 20:53:10 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:45.614 20:53:10 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:45.614 20:53:10 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:45.614 20:53:10 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:45.614 20:53:10 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.614 20:53:10 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:45.614 20:53:10 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:45.614 20:53:10 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:45.614 20:53:10 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:45.614 20:53:10 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:45.614 20:53:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:45.614 20:53:10 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.614 20:53:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:45.614 20:53:10 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:45.614 20:53:10 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:45.614 20:53:10 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:45.875 20:53:10 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:45.875 20:53:10 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:45.875 20:53:10 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:45.875 20:53:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:45.875 20:53:10 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:45.875 20:53:10 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:45.875 20:53:10 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:45.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:21:45.875 00:21:45.875 --- 10.0.0.2 ping statistics --- 00:21:45.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.875 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:21:45.875 20:53:10 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:45.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:21:45.875 00:21:45.875 --- 10.0.0.1 ping statistics --- 00:21:45.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.875 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:45.875 20:53:10 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.875 20:53:10 -- nvmf/common.sh@411 -- # return 0 00:21:45.875 20:53:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:45.875 20:53:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.875 20:53:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:45.875 20:53:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:45.875 20:53:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.875 20:53:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:45.875 20:53:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:46.135 20:53:10 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:46.135 20:53:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:46.135 20:53:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:46.135 20:53:10 -- common/autotest_common.sh@10 -- # set +x 00:21:46.135 20:53:10 -- nvmf/common.sh@470 -- # nvmfpid=2857084 00:21:46.135 20:53:10 -- nvmf/common.sh@471 -- # waitforlisten 2857084 00:21:46.135 20:53:10 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:46.135 20:53:10 -- common/autotest_common.sh@817 -- # '[' -z 2857084 ']' 00:21:46.135 20:53:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.135 20:53:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:46.135 20:53:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:46.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.135 20:53:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:46.135 20:53:10 -- common/autotest_common.sh@10 -- # set +x 00:21:46.135 [2024-04-24 20:53:10.610447] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:21:46.135 [2024-04-24 20:53:10.610494] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.135 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.135 [2024-04-24 20:53:10.693858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.135 [2024-04-24 20:53:10.758542] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.135 [2024-04-24 20:53:10.758581] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.136 [2024-04-24 20:53:10.758590] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.136 [2024-04-24 20:53:10.758597] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.136 [2024-04-24 20:53:10.758604] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
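For reference, the target-namespace plumbing traced just above reduces to the following sequence (a consolidated sketch; the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones this run detected, and the harness derives them dynamically rather than hard-coding them):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  # The target application is then started inside the namespace, as traced above:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF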
00:21:46.136 [2024-04-24 20:53:10.758918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.136 [2024-04-24 20:53:10.759024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.136 [2024-04-24 20:53:10.759181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.136 [2024-04-24 20:53:10.759182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.077 20:53:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:47.077 20:53:11 -- common/autotest_common.sh@850 -- # return 0 00:21:47.077 20:53:11 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:47.077 20:53:11 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:47.078 20:53:11 -- common/autotest_common.sh@10 -- # set +x 00:21:47.078 20:53:11 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.078 20:53:11 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:47.078 20:53:11 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:47.649 20:53:12 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:47.649 20:53:12 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:47.649 20:53:12 -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:21:47.649 20:53:12 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:47.911 20:53:12 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:47.911 20:53:12 -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:21:47.911 20:53:12 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:47.911 20:53:12 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:47.911 20:53:12 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:48.172 [2024-04-24 20:53:12.662322] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.172 20:53:12 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:48.434 20:53:12 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:48.434 20:53:12 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:48.695 20:53:13 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:48.695 20:53:13 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:48.695 20:53:13 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:48.957 [2024-04-24 20:53:13.517516] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.957 20:53:13 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:49.218 20:53:13 -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:21:49.218 20:53:13 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:49.218 20:53:13 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
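The subsystem that the perf runs below exercise is assembled through the RPCs traced above. Stripped of the xtrace plumbing and with the repository prefix shortened to scripts/rpc.py, the sequence amounts to the following (a sketch, not a verbatim replay; the Nvme0n1 bdev itself comes from load_subsystem_config with the gen_nvme.sh output, also traced above):

  scripts/rpc.py bdev_malloc_create 64 512                     # provides the Malloc0 bdev
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # the local NVMe at 0000:65:00.0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420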
00:21:49.218 20:53:13 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:21:50.605 Initializing NVMe Controllers 00:21:50.605 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:21:50.605 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:21:50.605 Initialization complete. Launching workers. 00:21:50.605 ======================================================== 00:21:50.605 Latency(us) 00:21:50.605 Device Information : IOPS MiB/s Average min max 00:21:50.605 PCIE (0000:65:00.0) NSID 1 from core 0: 80962.06 316.26 394.70 13.21 5194.57 00:21:50.605 ======================================================== 00:21:50.605 Total : 80962.06 316.26 394.70 13.21 5194.57 00:21:50.605 00:21:50.605 20:53:14 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:50.605 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.991 Initializing NVMe Controllers 00:21:51.991 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:51.991 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:51.991 Initialization complete. Launching workers. 00:21:51.991 ======================================================== 00:21:51.991 Latency(us) 00:21:51.991 Device Information : IOPS MiB/s Average min max 00:21:51.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 52.86 0.21 19129.94 133.85 46006.75 00:21:51.991 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 71.81 0.28 14590.20 5986.03 47888.18 00:21:51.991 ======================================================== 00:21:51.991 Total : 124.67 0.49 16515.05 133.85 47888.18 00:21:51.991 00:21:51.991 20:53:16 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:51.991 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.933 Initializing NVMe Controllers 00:21:52.933 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:52.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:52.933 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:52.933 Initialization complete. Launching workers. 
00:21:52.933 ======================================================== 00:21:52.933 Latency(us) 00:21:52.933 Device Information : IOPS MiB/s Average min max 00:21:52.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8637.19 33.74 3705.28 526.21 7324.10 00:21:52.933 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3829.01 14.96 8385.72 5205.44 15876.95 00:21:52.933 ======================================================== 00:21:52.933 Total : 12466.19 48.70 5142.88 526.21 15876.95 00:21:52.933 00:21:52.933 20:53:17 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:52.933 20:53:17 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:52.934 20:53:17 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:52.934 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.480 Initializing NVMe Controllers 00:21:55.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:55.480 Controller IO queue size 128, less than required. 00:21:55.480 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.480 Controller IO queue size 128, less than required. 00:21:55.480 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:55.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:55.480 Initialization complete. Launching workers. 00:21:55.480 ======================================================== 00:21:55.480 Latency(us) 00:21:55.480 Device Information : IOPS MiB/s Average min max 00:21:55.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1815.97 453.99 72043.29 41338.87 124836.53 00:21:55.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 568.49 142.12 232729.59 64234.02 372621.41 00:21:55.481 ======================================================== 00:21:55.481 Total : 2384.46 596.11 110353.27 41338.87 372621.41 00:21:55.481 00:21:55.481 20:53:19 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:55.481 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.740 No valid NVMe controllers or AIO or URING devices found 00:21:55.740 Initializing NVMe Controllers 00:21:55.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:55.740 Controller IO queue size 128, less than required. 00:21:55.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.740 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:55.740 Controller IO queue size 128, less than required. 00:21:55.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.740 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:55.740 WARNING: Some requested NVMe devices were skipped 00:21:55.740 20:53:20 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:55.740 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.333 Initializing NVMe Controllers 00:21:58.333 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:58.333 Controller IO queue size 128, less than required. 00:21:58.333 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:58.334 Controller IO queue size 128, less than required. 00:21:58.334 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:58.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:58.334 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:58.334 Initialization complete. Launching workers. 00:21:58.334 00:21:58.334 ==================== 00:21:58.334 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:58.334 TCP transport: 00:21:58.334 polls: 27566 00:21:58.334 idle_polls: 15882 00:21:58.334 sock_completions: 11684 00:21:58.334 nvme_completions: 5603 00:21:58.334 submitted_requests: 8384 00:21:58.334 queued_requests: 1 00:21:58.334 00:21:58.334 ==================== 00:21:58.334 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:58.334 TCP transport: 00:21:58.334 polls: 28289 00:21:58.334 idle_polls: 12156 00:21:58.334 sock_completions: 16133 00:21:58.334 nvme_completions: 6095 00:21:58.334 submitted_requests: 9116 00:21:58.334 queued_requests: 1 00:21:58.334 ======================================================== 00:21:58.334 Latency(us) 00:21:58.334 Device Information : IOPS MiB/s Average min max 00:21:58.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1398.12 349.53 92954.67 42103.28 156818.91 00:21:58.334 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1520.91 380.23 85229.94 40655.33 129101.04 00:21:58.334 ======================================================== 00:21:58.334 Total : 2919.02 729.76 88929.83 40655.33 156818.91 00:21:58.334 00:21:58.334 20:53:22 -- host/perf.sh@66 -- # sync 00:21:58.334 20:53:22 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.334 20:53:22 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:58.334 20:53:22 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:58.334 20:53:22 -- host/perf.sh@114 -- # nvmftestfini 00:21:58.334 20:53:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:58.334 20:53:22 -- nvmf/common.sh@117 -- # sync 00:21:58.334 20:53:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.334 20:53:22 -- nvmf/common.sh@120 -- # set +e 00:21:58.334 20:53:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.334 20:53:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.334 rmmod nvme_tcp 00:21:58.595 rmmod nvme_fabrics 00:21:58.595 rmmod nvme_keyring 00:21:58.595 20:53:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.595 20:53:23 -- nvmf/common.sh@124 -- # set -e 00:21:58.595 20:53:23 -- nvmf/common.sh@125 -- # return 0 00:21:58.595 20:53:23 -- 
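Taking stock of the perf sweep above: one spdk_nvme_perf binary drives both the local PCIe SSD and the TCP subsystem, and only the -r transport ID string changes between the two (both invocations below are copied from the runs above, with the repository prefix shortened to build/bin):

  # Local baseline against the PCIe SSD at 0000:65:00.0
  build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:PCIe traddr:0000:65:00.0'
  # Same workload shape over NVMe/TCP to the subsystem configured earlier
  build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'

As a quick consistency check on the result tables, IOPS and throughput agree: the PCIe baseline reports 80962.06 IOPS at a 4096-byte IO size, and 80962.06 x 4096 / 2^20 is about 316.3 MiB/s, matching the 316.26 MiB/s column.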
nvmf/common.sh@478 -- # '[' -n 2857084 ']' 00:21:58.595 20:53:23 -- nvmf/common.sh@479 -- # killprocess 2857084 00:21:58.595 20:53:23 -- common/autotest_common.sh@936 -- # '[' -z 2857084 ']' 00:21:58.595 20:53:23 -- common/autotest_common.sh@940 -- # kill -0 2857084 00:21:58.595 20:53:23 -- common/autotest_common.sh@941 -- # uname 00:21:58.595 20:53:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.595 20:53:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2857084 00:21:58.595 20:53:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:58.595 20:53:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:58.595 20:53:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2857084' 00:21:58.595 killing process with pid 2857084 00:21:58.595 20:53:23 -- common/autotest_common.sh@955 -- # kill 2857084 00:21:58.595 20:53:23 -- common/autotest_common.sh@960 -- # wait 2857084 00:22:00.506 20:53:25 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:00.506 20:53:25 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:00.506 20:53:25 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:00.506 20:53:25 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:00.506 20:53:25 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:00.506 20:53:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.506 20:53:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:00.506 20:53:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.051 20:53:27 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:03.051 00:22:03.051 real 0m23.832s 00:22:03.051 user 0m59.156s 00:22:03.051 sys 0m7.830s 00:22:03.051 20:53:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:03.051 20:53:27 -- common/autotest_common.sh@10 -- # set +x 00:22:03.051 ************************************ 00:22:03.051 END TEST nvmf_perf 00:22:03.051 ************************************ 00:22:03.051 20:53:27 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:03.051 20:53:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:03.051 20:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:03.051 20:53:27 -- common/autotest_common.sh@10 -- # set +x 00:22:03.051 ************************************ 00:22:03.051 START TEST nvmf_fio_host 00:22:03.051 ************************************ 00:22:03.051 20:53:27 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:03.051 * Looking for test storage... 
00:22:03.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:03.051 20:53:27 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.051 20:53:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.051 20:53:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.051 20:53:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.051 20:53:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- paths/export.sh@5 -- # export PATH 00:22:03.051 20:53:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:03.051 20:53:27 -- nvmf/common.sh@7 -- # uname -s 00:22:03.051 20:53:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:03.051 20:53:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:03.051 20:53:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:03.051 20:53:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:03.051 20:53:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:03.051 20:53:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:03.051 20:53:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:03.051 20:53:27 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:03.051 20:53:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:03.051 20:53:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:03.051 20:53:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:03.051 20:53:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:03.051 20:53:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:03.051 20:53:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:03.051 20:53:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:03.051 20:53:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:03.051 20:53:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:03.051 20:53:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:03.051 20:53:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:03.051 20:53:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:03.051 20:53:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- paths/export.sh@5 -- # export PATH 00:22:03.051 20:53:27 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:03.051 20:53:27 -- nvmf/common.sh@47 -- # : 0 00:22:03.051 20:53:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:03.051 20:53:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:03.051 20:53:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:03.051 20:53:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:03.051 20:53:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:03.051 20:53:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:03.051 20:53:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:03.051 20:53:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:03.051 20:53:27 -- host/fio.sh@12 -- # nvmftestinit 00:22:03.051 20:53:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:03.051 20:53:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.051 20:53:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:03.051 20:53:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:03.051 20:53:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:03.051 20:53:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.051 20:53:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.051 20:53:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.051 20:53:27 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:03.051 20:53:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:03.051 20:53:27 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:03.051 20:53:27 -- common/autotest_common.sh@10 -- # set +x 00:22:09.638 20:53:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:09.638 20:53:34 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:09.638 20:53:34 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:09.638 20:53:34 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:09.638 20:53:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:09.638 20:53:34 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:09.638 20:53:34 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:09.638 20:53:34 -- nvmf/common.sh@295 -- # net_devs=() 00:22:09.638 20:53:34 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:09.638 20:53:34 -- nvmf/common.sh@296 -- # e810=() 00:22:09.638 20:53:34 -- nvmf/common.sh@296 -- # local -ga e810 00:22:09.638 20:53:34 -- nvmf/common.sh@297 -- # x722=() 00:22:09.638 20:53:34 -- nvmf/common.sh@297 -- # local -ga x722 00:22:09.638 20:53:34 -- nvmf/common.sh@298 -- # mlx=() 00:22:09.638 20:53:34 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:09.638 20:53:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.638 20:53:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:09.638 20:53:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:09.638 20:53:34 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:09.638 20:53:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.638 20:53:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:09.638 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:09.638 20:53:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.638 20:53:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:09.638 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:09.638 20:53:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:09.638 20:53:34 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:09.638 20:53:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.639 20:53:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.639 20:53:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:09.639 20:53:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.639 20:53:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:09.639 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:09.639 20:53:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.639 20:53:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.639 20:53:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.639 20:53:34 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:09.639 20:53:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.639 20:53:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:09.639 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:09.639 20:53:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.639 20:53:34 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:09.639 20:53:34 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:09.639 20:53:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:09.639 20:53:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:09.639 20:53:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:09.639 20:53:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.639 20:53:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.639 20:53:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.639 20:53:34 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:09.639 20:53:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.639 20:53:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.639 20:53:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:09.639 20:53:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.639 20:53:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.639 20:53:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:09.639 20:53:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:09.639 20:53:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.639 20:53:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.639 20:53:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.639 20:53:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.900 20:53:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:09.900 20:53:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.900 20:53:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.900 20:53:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.900 20:53:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:09.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:22:09.900 00:22:09.900 --- 10.0.0.2 ping statistics --- 00:22:09.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.900 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:22:09.900 20:53:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:22:09.900 00:22:09.900 --- 10.0.0.1 ping statistics --- 00:22:09.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.900 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:09.900 20:53:34 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.900 20:53:34 -- nvmf/common.sh@411 -- # return 0 00:22:09.900 20:53:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:09.900 20:53:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.900 20:53:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:09.900 20:53:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:09.900 20:53:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.900 20:53:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:09.900 20:53:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:09.900 20:53:34 -- host/fio.sh@14 -- # [[ y != y ]] 00:22:09.900 20:53:34 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:22:09.900 20:53:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:09.900 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:09.900 20:53:34 -- host/fio.sh@22 -- # nvmfpid=2864142 00:22:09.900 20:53:34 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.900 20:53:34 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:09.900 20:53:34 -- host/fio.sh@26 -- # waitforlisten 2864142 00:22:09.900 20:53:34 -- common/autotest_common.sh@817 -- # '[' -z 2864142 ']' 00:22:09.900 20:53:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.900 20:53:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:09.900 20:53:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.900 20:53:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:09.900 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:09.900 [2024-04-24 20:53:34.510369] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:22:09.900 [2024-04-24 20:53:34.510417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.161 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.161 [2024-04-24 20:53:34.581129] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:10.161 [2024-04-24 20:53:34.644404] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.161 [2024-04-24 20:53:34.644440] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.161 [2024-04-24 20:53:34.644449] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.161 [2024-04-24 20:53:34.644456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.161 [2024-04-24 20:53:34.644464] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.161 [2024-04-24 20:53:34.644573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.161 [2024-04-24 20:53:34.644712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.161 [2024-04-24 20:53:34.644875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:10.161 [2024-04-24 20:53:34.644875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:10.161 20:53:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:10.161 20:53:34 -- common/autotest_common.sh@850 -- # return 0 00:22:10.161 20:53:34 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:10.161 20:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.161 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.161 [2024-04-24 20:53:34.748427] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:10.161 20:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.161 20:53:34 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:22:10.161 20:53:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:10.161 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.161 20:53:34 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:10.161 20:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.161 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 Malloc1 00:22:10.422 20:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.422 20:53:34 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:10.422 20:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.422 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 20:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.422 20:53:34 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:10.422 20:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.422 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 20:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.422 20:53:34 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:10.422 20:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.422 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 [2024-04-24 20:53:34.843958] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:10.422 20:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.422 20:53:34 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:10.422 20:53:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:10.422 20:53:34 -- common/autotest_common.sh@10 -- # set +x 00:22:10.422 20:53:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:10.422 20:53:34 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:10.422 20:53:34 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:10.422 20:53:34 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:10.422 20:53:34 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:10.422 20:53:34 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:10.422 20:53:34 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:10.422 20:53:34 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:10.422 20:53:34 -- common/autotest_common.sh@1327 -- # shift 00:22:10.422 20:53:34 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:10.422 20:53:34 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:10.422 20:53:34 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:10.422 20:53:34 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:10.422 20:53:34 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:10.422 20:53:34 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:10.422 20:53:34 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:10.423 20:53:34 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:10.683 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:10.683 fio-3.35 00:22:10.683 Starting 1 thread 00:22:10.683 EAL: No free 2048 kB hugepages reported on node 1 00:22:13.230 00:22:13.230 test: (groupid=0, jobs=1): err= 0: pid=2864515: Wed Apr 24 20:53:37 2024 00:22:13.230 read: IOPS=9511, BW=37.2MiB/s (39.0MB/s)(74.5MiB/2006msec) 00:22:13.230 slat (usec): min=2, max=282, avg= 2.20, stdev= 2.87 00:22:13.230 clat (usec): min=3730, max=12736, avg=7428.52, stdev=528.81 00:22:13.230 lat (usec): min=3762, max=12738, avg=7430.72, stdev=528.60 00:22:13.230 clat percentiles (usec): 00:22:13.230 | 1.00th=[ 6259], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7046], 00:22:13.230 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:22:13.230 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:22:13.230 | 99.00th=[ 8586], 99.50th=[ 8848], 99.90th=[11207], 99.95th=[11994], 00:22:13.230 | 99.99th=[12649] 00:22:13.230 bw ( KiB/s): min=37000, max=38720, per=99.95%, avg=38028.00, stdev=733.81, samples=4 00:22:13.230 iops : min= 9250, max= 9680, avg=9507.00, stdev=183.45, samples=4 00:22:13.230 write: IOPS=9522, BW=37.2MiB/s (39.0MB/s)(74.6MiB/2006msec); 0 zone resets 00:22:13.230 slat (usec): min=2, max=263, avg= 2.28, stdev= 2.10 00:22:13.230 clat (usec): min=2894, 
max=12131, avg=5952.86, stdev=453.29 00:22:13.230 lat (usec): min=2912, max=12133, avg=5955.14, stdev=453.14 00:22:13.230 clat percentiles (usec): 00:22:13.230 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:22:13.230 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:22:13.230 | 70.00th=[ 6194], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:22:13.230 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[10028], 99.95th=[10814], 00:22:13.230 | 99.99th=[12125] 00:22:13.230 bw ( KiB/s): min=37888, max=38272, per=99.98%, avg=38084.00, stdev=217.18, samples=4 00:22:13.230 iops : min= 9472, max= 9568, avg=9521.00, stdev=54.30, samples=4 00:22:13.230 lat (msec) : 4=0.06%, 10=99.81%, 20=0.13% 00:22:13.230 cpu : usr=71.42%, sys=27.33%, ctx=52, majf=0, minf=4 00:22:13.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:13.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:13.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:13.230 issued rwts: total=19081,19102,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:13.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:13.230 00:22:13.230 Run status group 0 (all jobs): 00:22:13.230 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.5MiB (78.2MB), run=2006-2006msec 00:22:13.230 WRITE: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=74.6MiB (78.2MB), run=2006-2006msec 00:22:13.230 20:53:37 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:13.230 20:53:37 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:13.230 20:53:37 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:22:13.230 20:53:37 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:13.230 20:53:37 -- common/autotest_common.sh@1325 -- # local sanitizers 00:22:13.230 20:53:37 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:13.230 20:53:37 -- common/autotest_common.sh@1327 -- # shift 00:22:13.230 20:53:37 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:22:13.230 20:53:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.230 20:53:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:13.230 20:53:37 -- common/autotest_common.sh@1331 -- # grep libasan 00:22:13.230 20:53:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:13.230 20:53:37 -- common/autotest_common.sh@1331 -- # asan_lib= 00:22:13.230 20:53:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:13.230 20:53:37 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:22:13.230 20:53:37 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:13.230 20:53:37 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:22:13.230 20:53:37 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:22:13.230 20:53:37 -- 
common/autotest_common.sh@1331 -- # asan_lib= 00:22:13.230 20:53:37 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:22:13.230 20:53:37 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:13.230 20:53:37 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:13.491 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:13.491 fio-3.35 00:22:13.491 Starting 1 thread 00:22:13.491 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.042 00:22:16.042 test: (groupid=0, jobs=1): err= 0: pid=2865166: Wed Apr 24 20:53:40 2024 00:22:16.042 read: IOPS=9000, BW=141MiB/s (147MB/s)(282MiB/2004msec) 00:22:16.042 slat (usec): min=3, max=107, avg= 3.63, stdev= 1.58 00:22:16.042 clat (usec): min=1845, max=53487, avg=8698.47, stdev=3974.27 00:22:16.042 lat (usec): min=1848, max=53490, avg=8702.10, stdev=3974.34 00:22:16.042 clat percentiles (usec): 00:22:16.042 | 1.00th=[ 4490], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6587], 00:22:16.042 | 30.00th=[ 7177], 40.00th=[ 7832], 50.00th=[ 8356], 60.00th=[ 8979], 00:22:16.042 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11076], 95.00th=[11994], 00:22:16.042 | 99.00th=[14484], 99.50th=[47973], 99.90th=[52691], 99.95th=[53216], 00:22:16.042 | 99.99th=[53216] 00:22:16.042 bw ( KiB/s): min=66720, max=81149, per=49.40%, avg=71143.25, stdev=6830.83, samples=4 00:22:16.042 iops : min= 4170, max= 5071, avg=4446.25, stdev=426.53, samples=4 00:22:16.042 write: IOPS=5267, BW=82.3MiB/s (86.3MB/s)(145MiB/1764msec); 0 zone resets 00:22:16.042 slat (usec): min=40, max=322, avg=41.11, stdev= 6.93 00:22:16.042 clat (usec): min=2528, max=16073, avg=9566.98, stdev=1595.47 00:22:16.042 lat (usec): min=2568, max=16114, avg=9608.09, stdev=1596.39 00:22:16.042 clat percentiles (usec): 00:22:16.042 | 1.00th=[ 6587], 5.00th=[ 7242], 10.00th=[ 7635], 20.00th=[ 8291], 00:22:16.042 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:22:16.043 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11600], 95.00th=[12518], 00:22:16.043 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15926], 99.95th=[15926], 00:22:16.043 | 99.99th=[16057] 00:22:16.043 bw ( KiB/s): min=69312, max=84534, per=87.75%, avg=73957.50, stdev=7139.95, samples=4 00:22:16.043 iops : min= 4332, max= 5283, avg=4622.25, stdev=446.06, samples=4 00:22:16.043 lat (msec) : 2=0.01%, 4=0.37%, 10=73.02%, 20=26.14%, 50=0.23% 00:22:16.043 lat (msec) : 100=0.24% 00:22:16.043 cpu : usr=84.67%, sys=13.53%, ctx=32, majf=0, minf=6 00:22:16.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:16.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:16.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:16.043 issued rwts: total=18037,9292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:16.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:16.043 00:22:16.043 Run status group 0 (all jobs): 00:22:16.043 READ: bw=141MiB/s (147MB/s), 141MiB/s-141MiB/s (147MB/s-147MB/s), io=282MiB (296MB), run=2004-2004msec 00:22:16.043 WRITE: bw=82.3MiB/s (86.3MB/s), 82.3MiB/s-82.3MiB/s (86.3MB/s-86.3MB/s), io=145MiB (152MB), run=1764-1764msec 00:22:16.043 20:53:40 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:22:16.043 20:53:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:16.043 20:53:40 -- common/autotest_common.sh@10 -- # set +x 00:22:16.043 20:53:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:16.043 20:53:40 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:22:16.043 20:53:40 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:22:16.043 20:53:40 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:22:16.043 20:53:40 -- host/fio.sh@84 -- # nvmftestfini 00:22:16.043 20:53:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:16.043 20:53:40 -- nvmf/common.sh@117 -- # sync 00:22:16.043 20:53:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:16.043 20:53:40 -- nvmf/common.sh@120 -- # set +e 00:22:16.043 20:53:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:16.043 20:53:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:16.043 rmmod nvme_tcp 00:22:16.043 rmmod nvme_fabrics 00:22:16.043 rmmod nvme_keyring 00:22:16.043 20:53:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:16.043 20:53:40 -- nvmf/common.sh@124 -- # set -e 00:22:16.043 20:53:40 -- nvmf/common.sh@125 -- # return 0 00:22:16.043 20:53:40 -- nvmf/common.sh@478 -- # '[' -n 2864142 ']' 00:22:16.043 20:53:40 -- nvmf/common.sh@479 -- # killprocess 2864142 00:22:16.043 20:53:40 -- common/autotest_common.sh@936 -- # '[' -z 2864142 ']' 00:22:16.043 20:53:40 -- common/autotest_common.sh@940 -- # kill -0 2864142 00:22:16.043 20:53:40 -- common/autotest_common.sh@941 -- # uname 00:22:16.043 20:53:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:16.043 20:53:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2864142 00:22:16.043 20:53:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:16.043 20:53:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:16.043 20:53:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2864142' 00:22:16.043 killing process with pid 2864142 00:22:16.043 20:53:40 -- common/autotest_common.sh@955 -- # kill 2864142 00:22:16.043 20:53:40 -- common/autotest_common.sh@960 -- # wait 2864142 00:22:16.043 20:53:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:16.043 20:53:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:16.043 20:53:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:16.043 20:53:40 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:16.043 20:53:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:16.043 20:53:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.043 20:53:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:16.043 20:53:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.594 20:53:42 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:18.594 00:22:18.594 real 0m15.329s 00:22:18.594 user 0m52.028s 00:22:18.594 sys 0m6.922s 00:22:18.594 20:53:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:18.594 20:53:42 -- common/autotest_common.sh@10 -- # set +x 00:22:18.594 ************************************ 00:22:18.594 END TEST nvmf_fio_host 00:22:18.594 ************************************ 00:22:18.594 20:53:42 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:18.594 20:53:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:18.594 20:53:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 
00:22:18.594 20:53:42 -- common/autotest_common.sh@10 -- # set +x 00:22:18.594 ************************************ 00:22:18.594 START TEST nvmf_failover 00:22:18.594 ************************************ 00:22:18.594 20:53:42 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:18.594 * Looking for test storage... 00:22:18.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:18.594 20:53:42 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.594 20:53:42 -- nvmf/common.sh@7 -- # uname -s 00:22:18.594 20:53:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.594 20:53:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.594 20:53:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.594 20:53:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.594 20:53:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.594 20:53:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.594 20:53:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.594 20:53:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.594 20:53:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.594 20:53:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.594 20:53:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:18.594 20:53:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:18.594 20:53:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.594 20:53:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.594 20:53:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.594 20:53:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.594 20:53:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.594 20:53:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.594 20:53:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.594 20:53:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.594 20:53:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.594 20:53:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.594 20:53:43 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.594 20:53:43 -- paths/export.sh@5 -- # export PATH 00:22:18.594 20:53:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.594 20:53:43 -- nvmf/common.sh@47 -- # : 0 00:22:18.594 20:53:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.594 20:53:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.594 20:53:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.594 20:53:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.594 20:53:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.594 20:53:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.594 20:53:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.594 20:53:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.594 20:53:43 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:18.594 20:53:43 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:18.594 20:53:43 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:18.594 20:53:43 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:18.594 20:53:43 -- host/failover.sh@18 -- # nvmftestinit 00:22:18.594 20:53:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:18.594 20:53:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:18.594 20:53:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:18.594 20:53:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:18.594 20:53:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:18.594 20:53:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.594 20:53:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.594 20:53:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:18.594 20:53:43 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:18.594 20:53:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:18.594 20:53:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.594 20:53:43 -- common/autotest_common.sh@10 -- # set +x 00:22:25.257 20:53:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:25.257 20:53:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.257 20:53:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.257 20:53:49 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.257 20:53:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.257 20:53:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.257 20:53:49 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.257 20:53:49 -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.257 20:53:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.257 20:53:49 -- nvmf/common.sh@296 -- # e810=() 00:22:25.257 20:53:49 -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.257 20:53:49 -- nvmf/common.sh@297 -- # x722=() 00:22:25.257 20:53:49 -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.257 20:53:49 -- nvmf/common.sh@298 -- # mlx=() 00:22:25.257 20:53:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.257 20:53:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.257 20:53:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.257 20:53:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.257 20:53:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.257 20:53:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.257 20:53:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:25.257 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:25.257 20:53:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.257 20:53:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:25.257 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:25.257 20:53:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.257 20:53:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.257 20:53:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.257 20:53:49 -- nvmf/common.sh@384 -- # (( 1 
== 0 )) 00:22:25.257 20:53:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.257 20:53:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:25.257 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:25.257 20:53:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.257 20:53:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.257 20:53:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.257 20:53:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:25.257 20:53:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.257 20:53:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:25.257 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:25.257 20:53:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.257 20:53:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:25.257 20:53:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:25.257 20:53:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:25.257 20:53:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:25.257 20:53:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.257 20:53:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.257 20:53:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.257 20:53:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:25.257 20:53:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.257 20:53:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.257 20:53:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.257 20:53:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.257 20:53:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.257 20:53:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:25.257 20:53:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.257 20:53:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.258 20:53:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.519 20:53:50 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.519 20:53:50 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.519 20:53:50 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.519 20:53:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.519 20:53:50 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.519 20:53:50 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.519 20:53:50 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:25.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.572 ms 00:22:25.519 00:22:25.519 --- 10.0.0.2 ping statistics --- 00:22:25.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.519 rtt min/avg/max/mdev = 0.572/0.572/0.572/0.000 ms 00:22:25.519 20:53:50 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:25.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:22:25.519 00:22:25.519 --- 10.0.0.1 ping statistics --- 00:22:25.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.519 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:22:25.519 20:53:50 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.519 20:53:50 -- nvmf/common.sh@411 -- # return 0 00:22:25.519 20:53:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:25.519 20:53:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.519 20:53:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:25.519 20:53:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:25.519 20:53:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:25.519 20:53:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:25.519 20:53:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:25.780 20:53:50 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:25.780 20:53:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:25.780 20:53:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:25.780 20:53:50 -- common/autotest_common.sh@10 -- # set +x 00:22:25.781 20:53:50 -- nvmf/common.sh@470 -- # nvmfpid=2869829 00:22:25.781 20:53:50 -- nvmf/common.sh@471 -- # waitforlisten 2869829 00:22:25.781 20:53:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:25.781 20:53:50 -- common/autotest_common.sh@817 -- # '[' -z 2869829 ']' 00:22:25.781 20:53:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.781 20:53:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.781 20:53:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:25.781 20:53:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.781 20:53:50 -- common/autotest_common.sh@10 -- # set +x 00:22:25.781 [2024-04-24 20:53:50.247509] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:22:25.781 [2024-04-24 20:53:50.247578] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.781 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.781 [2024-04-24 20:53:50.317694] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:25.781 [2024-04-24 20:53:50.391126] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.781 [2024-04-24 20:53:50.391164] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.781 [2024-04-24 20:53:50.391176] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.781 [2024-04-24 20:53:50.391183] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.781 [2024-04-24 20:53:50.391188] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.781 [2024-04-24 20:53:50.391295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:25.781 [2024-04-24 20:53:50.391435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.781 [2024-04-24 20:53:50.391436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.724 20:53:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.724 20:53:51 -- common/autotest_common.sh@850 -- # return 0 00:22:26.724 20:53:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:26.724 20:53:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:26.724 20:53:51 -- common/autotest_common.sh@10 -- # set +x 00:22:26.724 20:53:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.724 20:53:51 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:26.724 [2024-04-24 20:53:51.300073] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.724 20:53:51 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:26.986 Malloc0 00:22:26.986 20:53:51 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:27.247 20:53:51 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:27.508 20:53:51 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:27.769 [2024-04-24 20:53:52.176754] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:27.769 20:53:52 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:27.769 [2024-04-24 20:53:52.393341] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:28.029 20:53:52 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:28.029 [2024-04-24 20:53:52.605961] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:28.029 20:53:52 -- host/failover.sh@31 -- # bdevperf_pid=2870199 00:22:28.029 20:53:52 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.029 20:53:52 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:28.029 20:53:52 -- host/failover.sh@34 -- # waitforlisten 2870199 /var/tmp/bdevperf.sock 00:22:28.029 20:53:52 -- common/autotest_common.sh@817 -- # '[' -z 2870199 ']' 00:22:28.029 20:53:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.029 20:53:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:28.029 20:53:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:22:28.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.029 20:53:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:28.029 20:53:52 -- common/autotest_common.sh@10 -- # set +x 00:22:28.291 20:53:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:28.291 20:53:52 -- common/autotest_common.sh@850 -- # return 0 00:22:28.291 20:53:52 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.552 NVMe0n1 00:22:28.552 20:53:53 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:29.123 00:22:29.123 20:53:53 -- host/failover.sh@39 -- # run_test_pid=2870527 00:22:29.123 20:53:53 -- host/failover.sh@41 -- # sleep 1 00:22:29.123 20:53:53 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.065 20:53:54 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.326 [2024-04-24 20:53:54.818363] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818398] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818404] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818409] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818413] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818418] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818422] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818427] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818436] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818444] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818449] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be 
set 00:22:30.326 [2024-04-24 20:53:54.818453] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818458] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818462] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818470] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818479] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818487] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818501] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818505] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818509] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818513] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818519] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818523] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818527] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818532] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818536] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818540] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818544] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818549] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 
is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818553] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.326 [2024-04-24 20:53:54.818558] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818563] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818567] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818571] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818576] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818584] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818588] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818592] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818597] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818602] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818606] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818610] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818615] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818621] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 [2024-04-24 20:53:54.818626] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72c340 is same with the state(5) to be set 00:22:30.327 20:53:54 -- host/failover.sh@45 -- # sleep 3 00:22:33.624 20:53:57 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:33.624 00:22:33.624 20:53:58 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:33.885 [2024-04-24 20:53:58.335297] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72d1f0 is same with the state(5) to be set 00:22:33.885 [2024-04-24 20:53:58.335334] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72d1f0 is same with the state(5) to be set 00:22:33.885 [2024-04-24 20:53:58.335342] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72d1f0 is same with the state(5) to be set 00:22:33.885 [2024-04-24 20:53:58.335348] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72d1f0 is same with the state(5) to be set 00:22:33.885 [2024-04-24 20:53:58.335355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72d1f0 is same with the state(5) to be set 00:22:33.885 20:53:58 -- host/failover.sh@50 -- # sleep 3 00:22:37.188 20:54:01 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.188 [2024-04-24 20:54:01.556894] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.188 20:54:01 -- host/failover.sh@55 -- # sleep 1 00:22:38.132 20:54:02 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:38.393 [2024-04-24 20:54:02.776543] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776579] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776587] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776594] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776600] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776607] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776613] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776619] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776625] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776631] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776637] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776643] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776649] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 [2024-04-24 20:54:02.776661] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 
[2024-04-24 20:54:02.776668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x72ded0 is same with the state(5) to be set 00:22:38.393 20:54:02 -- host/failover.sh@59 -- # wait 2870527 00:22:45.056 0 00:22:45.056 20:54:08 -- host/failover.sh@61 -- # killprocess 2870199 00:22:45.056 20:54:08 -- common/autotest_common.sh@936 -- # '[' -z 2870199 ']' 00:22:45.056 20:54:08 -- common/autotest_common.sh@940 -- # kill -0 2870199 00:22:45.056 20:54:08 -- common/autotest_common.sh@941 -- # uname 00:22:45.056 20:54:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:45.056 20:54:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2870199 00:22:45.056 20:54:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:45.056 20:54:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:45.056 20:54:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2870199' 00:22:45.056 killing process with pid 2870199 00:22:45.056 20:54:08 -- common/autotest_common.sh@955 -- # kill 2870199 00:22:45.056 20:54:08 -- common/autotest_common.sh@960 -- # wait 2870199 00:22:45.056 20:54:08 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:45.056 [2024-04-24 20:53:52.682707] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:22:45.056 [2024-04-24 20:53:52.682768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2870199 ] 00:22:45.056 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.056 [2024-04-24 20:53:52.757928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.056 [2024-04-24 20:53:52.819981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.056 Running I/O for 15 seconds... 
00:22:45.056 [2024-04-24 20:53:54.819510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.056 [2024-04-24 20:53:54.819546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.056 [2024-04-24 20:53:54.819564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.056 [2024-04-24 20:53:54.819579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.056 [2024-04-24 20:53:54.819595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19213c0 is same with the state(5) to be set 00:22:45.056 [2024-04-24 20:53:54.819641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.819989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.819998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.820005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.820014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.820021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.056 [2024-04-24 20:53:54.820030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.056 [2024-04-24 20:53:54.820037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:97976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:98000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:98008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:98016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:98024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:98032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:98040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:98056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:98088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:98096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:98104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:98112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820397] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:98120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:98128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:98136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:98152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:98160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:98184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:98192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:98200 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:98208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:98216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:98240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.057 [2024-04-24 20:53:54.820668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:98256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.057 [2024-04-24 20:53:54.820675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:98264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:98272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 
[2024-04-24 20:53:54.820722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:98288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:98304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:98312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:98320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:98328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:98336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:98344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:98360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:98376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:98392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:98400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.820986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.820994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:98424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:98440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:98456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:98472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:98488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:98496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:98504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:98512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:98520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:98528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:98544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:98552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:98560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.058 [2024-04-24 20:53:54.821325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.058 [2024-04-24 20:53:54.821332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 
20:53:54.821389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:98608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:98624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:98640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:98656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:98688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:54.821675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:54.821710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821733] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:22:45.059 [2024-04-24 20:53:54.821740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.059 [2024-04-24 20:53:54.821747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98696 len:8 PRP1 0x0 PRP2 0x0 00:22:45.059 [2024-04-24 20:53:54.821754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:54.821792] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1931820 was disconnected and freed. reset controller. 00:22:45.059 [2024-04-24 20:53:54.821802] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:45.059 [2024-04-24 20:53:54.821812] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:45.059 [2024-04-24 20:53:54.825297] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.059 [2024-04-24 20:53:54.825321] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19213c0 (9): Bad file descriptor 00:22:45.059 [2024-04-24 20:53:54.854283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:45.059 [2024-04-24 20:53:58.335452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.059 [2024-04-24 20:53:58.335489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335595] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.059 [2024-04-24 20:53:58.335676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.059 [2024-04-24 20:53:58.335683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94424 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 
[2024-04-24 20:53:58.335931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.335980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.335989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.060 [2024-04-24 20:53:58.335998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:94240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.060 [2024-04-24 20:53:58.336013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336095] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.060 [2024-04-24 20:53:58.336212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.060 [2024-04-24 20:53:58.336220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:94808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 
20:53:58.336609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:94904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.061 [2024-04-24 20:53:58.336880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.061 [2024-04-24 20:53:58.336888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.336895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.336904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.336912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.336921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.336928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.336936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:94976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.336943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.336952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.336960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.336969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:94992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.336976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.336986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.336993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:95008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.337009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.337026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:95024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.337041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.062 [2024-04-24 20:53:58.337057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95040 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95048 len:8 
PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95056 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337155] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95064 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95072 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95080 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95088 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95096 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 
20:53:58.337280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95104 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95112 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95120 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337371] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95128 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337396] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95136 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95144 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337436] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95152 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95160 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.062 [2024-04-24 20:53:58.337493] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.062 [2024-04-24 20:53:58.337499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.062 [2024-04-24 20:53:58.337504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95168 len:8 PRP1 0x0 PRP2 0x0 00:22:45.062 [2024-04-24 20:53:58.337511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95176 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95184 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95192 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95200 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337630] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95208 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95216 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95224 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337707] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95232 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95240 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:45.063 [2024-04-24 20:53:58.337757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94248 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94256 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.337800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.337809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.337814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.337820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94264 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.348405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.348446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.348454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94272 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.348461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.348476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.348483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94280 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.348490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348497] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.348502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.348508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94288 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.348515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348523] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.063 [2024-04-24 20:53:58.348529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.063 [2024-04-24 20:53:58.348535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94296 len:8 PRP1 0x0 PRP2 0x0 00:22:45.063 [2024-04-24 20:53:58.348543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348580] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1933670 was disconnected and freed. reset controller. 00:22:45.063 [2024-04-24 20:53:58.348592] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:45.063 [2024-04-24 20:53:58.348620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.063 [2024-04-24 20:53:58.348629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.063 [2024-04-24 20:53:58.348646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.063 [2024-04-24 20:53:58.348661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.063 [2024-04-24 20:53:58.348676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:53:58.348689] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:45.063 [2024-04-24 20:53:58.348736] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19213c0 (9): Bad file descriptor 00:22:45.063 [2024-04-24 20:53:58.352215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.063 [2024-04-24 20:53:58.390908] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:45.063 [2024-04-24 20:54:02.776759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:28712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.063 [2024-04-24 20:54:02.776798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:54:02.776818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:28720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.063 [2024-04-24 20:54:02.776826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:54:02.776835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:28728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.063 [2024-04-24 20:54:02.776843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:54:02.776852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:28736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.063 [2024-04-24 20:54:02.776860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:54:02.776869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:28744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.063 [2024-04-24 20:54:02.776876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.063 [2024-04-24 20:54:02.776885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.063 [2024-04-24 20:54:02.776893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.776902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:28760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.776909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.776918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.776925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.776934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.776941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.776950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:28784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.776957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.776966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:28792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.776973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.776987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.776994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:27856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:27880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:28800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.777111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:28808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.777127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:4 nsid:1 lba:28816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.777143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:28824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.777159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:27920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28832 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.064 [2024-04-24 20:54:02.777305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:27984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:28000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:28032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:45.064 [2024-04-24 20:54:02.777470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:28064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.064 [2024-04-24 20:54:02.777534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.064 [2024-04-24 20:54:02.777544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:28096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:28104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:28112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777632] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:28120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.065 [2024-04-24 20:54:02.777697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.065 [2024-04-24 20:54:02.777713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:28168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777797] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:28192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:28208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:28240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:28256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.065 [2024-04-24 20:54:02.777979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.777988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.777994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.778004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.778011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.778023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.778032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.778042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.778051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.778061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.778068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.778077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.778084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.065 [2024-04-24 20:54:02.778094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.065 [2024-04-24 20:54:02.778103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:28320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:28328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:45.066 [2024-04-24 20:54:02.778149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:28344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:28384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778316] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:28448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:28464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:28472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:28480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778482] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:28496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:28512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:28520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:28552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:28560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:28568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778650] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:66 nsid:1 lba:28576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:28592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.066 [2024-04-24 20:54:02.778760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.066 [2024-04-24 20:54:02.778769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:28656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:28672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:45.067 [2024-04-24 20:54:02.778912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19373e0 is same with the state(5) to be set 00:22:45.067 [2024-04-24 20:54:02.778929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:45.067 [2024-04-24 20:54:02.778935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:45.067 [2024-04-24 20:54:02.778942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:28704 len:8 PRP1 0x0 PRP2 0x0 00:22:45.067 [2024-04-24 20:54:02.778950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.778987] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19373e0 was disconnected and freed. reset controller. 
00:22:45.067 [2024-04-24 20:54:02.778997] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:45.067 [2024-04-24 20:54:02.779018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.067 [2024-04-24 20:54:02.779026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.779034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.067 [2024-04-24 20:54:02.779041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.779049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.067 [2024-04-24 20:54:02.779057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.779064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:45.067 [2024-04-24 20:54:02.779071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.067 [2024-04-24 20:54:02.779079] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:45.067 [2024-04-24 20:54:02.782637] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:45.067 [2024-04-24 20:54:02.782662] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19213c0 (9): Bad file descriptor 00:22:45.067 [2024-04-24 20:54:02.820413] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:45.067
00:22:45.067 Latency(us)
00:22:45.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.067 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:45.067 Verification LBA range: start 0x0 length 0x4000
00:22:45.067 NVMe0n1 : 15.01 9114.86 35.60 254.65 0.00 13633.33 529.07 20862.29
00:22:45.067 ===================================================================================================================
00:22:45.067 Total : 9114.86 35.60 254.65 0.00 13633.33 529.07 20862.29
00:22:45.067 Received shutdown signal, test time was about 15.000000 seconds
00:22:45.067
00:22:45.067 Latency(us)
00:22:45.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.067 ===================================================================================================================
00:22:45.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:45.067 20:54:08 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:45.067 20:54:08 -- host/failover.sh@65 -- # count=3
00:22:45.067 20:54:08 -- host/failover.sh@67 -- # (( count != 3 ))
00:22:45.067 20:54:08 -- host/failover.sh@73 -- # bdevperf_pid=2873696
00:22:45.067 20:54:08 -- host/failover.sh@75 -- # waitforlisten 2873696 /var/tmp/bdevperf.sock
00:22:45.067 20:54:08 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:45.067 20:54:08 -- common/autotest_common.sh@817 -- # '[' -z 2873696 ']'
00:22:45.067 20:54:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:45.067 20:54:08 -- common/autotest_common.sh@822 -- # local max_retries=100
00:22:45.067 20:54:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:45.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:45.067 20:54:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:45.067 20:54:08 -- common/autotest_common.sh@10 -- # set +x 00:22:45.067 20:54:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:45.067 20:54:09 -- common/autotest_common.sh@850 -- # return 0 00:22:45.067 20:54:09 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:45.067 [2024-04-24 20:54:09.431599] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:45.067 20:54:09 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:45.067 [2024-04-24 20:54:09.648176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:45.067 20:54:09 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.637 NVMe0n1 00:22:45.637 20:54:09 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:45.897 00:22:45.897 20:54:10 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:46.158 00:22:46.158 20:54:10 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:46.158 20:54:10 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:46.419 20:54:10 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:46.680 20:54:11 -- host/failover.sh@87 -- # sleep 3 00:22:50.001 20:54:14 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:50.001 20:54:14 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:50.001 20:54:14 -- host/failover.sh@90 -- # run_test_pid=2875130 00:22:50.001 20:54:14 -- host/failover.sh@92 -- # wait 2875130 00:22:50.001 20:54:14 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:50.941 0 00:22:50.941 20:54:15 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:50.941 [2024-04-24 20:54:09.036518] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:22:50.941 [2024-04-24 20:54:09.036576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2873696 ] 00:22:50.941 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.941 [2024-04-24 20:54:09.112247] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.941 [2024-04-24 20:54:09.174402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.941 [2024-04-24 20:54:11.170581] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:50.941 [2024-04-24 20:54:11.170629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.941 [2024-04-24 20:54:11.170641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.941 [2024-04-24 20:54:11.170650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.941 [2024-04-24 20:54:11.170658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.941 [2024-04-24 20:54:11.170666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.941 [2024-04-24 20:54:11.170673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.941 [2024-04-24 20:54:11.170681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.941 [2024-04-24 20:54:11.170688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.941 [2024-04-24 20:54:11.170695] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:50.941 [2024-04-24 20:54:11.170732] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:50.941 [2024-04-24 20:54:11.170748] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c03c0 (9): Bad file descriptor 00:22:50.941 [2024-04-24 20:54:11.178491] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:50.941 Running I/O for 1 seconds... 
00:22:50.941 00:22:50.941 Latency(us) 00:22:50.941 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.941 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:50.941 Verification LBA range: start 0x0 length 0x4000 00:22:50.941 NVMe0n1 : 1.01 8868.61 34.64 0.00 0.00 14365.95 3017.39 14745.60 00:22:50.941 =================================================================================================================== 00:22:50.941 Total : 8868.61 34.64 0.00 0.00 14365.95 3017.39 14745.60 00:22:50.941 20:54:15 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:50.941 20:54:15 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:51.201 20:54:15 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.462 20:54:15 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:51.462 20:54:15 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:51.722 20:54:16 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:51.982 20:54:16 -- host/failover.sh@101 -- # sleep 3 00:22:55.281 20:54:19 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:55.281 20:54:19 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:55.281 20:54:19 -- host/failover.sh@108 -- # killprocess 2873696 00:22:55.281 20:54:19 -- common/autotest_common.sh@936 -- # '[' -z 2873696 ']' 00:22:55.281 20:54:19 -- common/autotest_common.sh@940 -- # kill -0 2873696 00:22:55.281 20:54:19 -- common/autotest_common.sh@941 -- # uname 00:22:55.281 20:54:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:55.281 20:54:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2873696 00:22:55.281 20:54:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:55.281 20:54:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:55.281 20:54:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2873696' 00:22:55.281 killing process with pid 2873696 00:22:55.281 20:54:19 -- common/autotest_common.sh@955 -- # kill 2873696 00:22:55.281 20:54:19 -- common/autotest_common.sh@960 -- # wait 2873696 00:22:55.281 20:54:19 -- host/failover.sh@110 -- # sync 00:22:55.281 20:54:19 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:55.542 20:54:20 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:55.542 20:54:20 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:55.542 20:54:20 -- host/failover.sh@116 -- # nvmftestfini 00:22:55.542 20:54:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:55.542 20:54:20 -- nvmf/common.sh@117 -- # sync 00:22:55.542 20:54:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:55.542 20:54:20 -- nvmf/common.sh@120 -- # set +e 00:22:55.542 20:54:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:55.542 20:54:20 -- nvmf/common.sh@122 -- 
# modprobe -v -r nvme-tcp 00:22:55.542 rmmod nvme_tcp 00:22:55.542 rmmod nvme_fabrics 00:22:55.542 rmmod nvme_keyring 00:22:55.542 20:54:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:55.542 20:54:20 -- nvmf/common.sh@124 -- # set -e 00:22:55.542 20:54:20 -- nvmf/common.sh@125 -- # return 0 00:22:55.542 20:54:20 -- nvmf/common.sh@478 -- # '[' -n 2869829 ']' 00:22:55.542 20:54:20 -- nvmf/common.sh@479 -- # killprocess 2869829 00:22:55.542 20:54:20 -- common/autotest_common.sh@936 -- # '[' -z 2869829 ']' 00:22:55.542 20:54:20 -- common/autotest_common.sh@940 -- # kill -0 2869829 00:22:55.542 20:54:20 -- common/autotest_common.sh@941 -- # uname 00:22:55.542 20:54:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:55.542 20:54:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2869829 00:22:55.542 20:54:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:55.542 20:54:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:55.542 20:54:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2869829' 00:22:55.542 killing process with pid 2869829 00:22:55.542 20:54:20 -- common/autotest_common.sh@955 -- # kill 2869829 00:22:55.542 20:54:20 -- common/autotest_common.sh@960 -- # wait 2869829 00:22:55.802 20:54:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:55.803 20:54:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:55.803 20:54:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:55.803 20:54:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.803 20:54:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:55.803 20:54:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.803 20:54:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.803 20:54:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.347 20:54:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.347 00:22:58.347 real 0m39.498s 00:22:58.347 user 2m2.564s 00:22:58.347 sys 0m8.299s 00:22:58.347 20:54:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:58.347 20:54:22 -- common/autotest_common.sh@10 -- # set +x 00:22:58.347 ************************************ 00:22:58.347 END TEST nvmf_failover 00:22:58.347 ************************************ 00:22:58.347 20:54:22 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:58.347 20:54:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:58.347 20:54:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:58.347 20:54:22 -- common/autotest_common.sh@10 -- # set +x 00:22:58.347 ************************************ 00:22:58.347 START TEST nvmf_discovery 00:22:58.347 ************************************ 00:22:58.347 20:54:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:58.347 * Looking for test storage... 
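The try.txt dump above records the whole nvmf_failover sequence: host/failover.sh adds two extra TCP listeners (4421, 4422) to nqn.2016-06.io.spdk:cnode1, attaches NVMe0 through all three portals from the bdevperf instance, detaches the 4420 path to force a failover to 4421, and then runs verification I/O. A condensed, hedged recap of those RPCs (the full script paths in the trace are shortened here to rpc.py and bdevperf.py; the socket and NQN are the ones used in this run):

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Drop the primary path; bdev_nvme fails over ("Start failover from
    # 10.0.0.2:4420 to 10.0.0.2:4421" in the dump above), then run the workload.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The one-second bdevperf run recorded above completed at roughly 8.9k IOPS (34.6 MiB/s) with zero failed or timed-out I/O on the surviving paths.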
00:22:58.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:58.347 20:54:22 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.347 20:54:22 -- nvmf/common.sh@7 -- # uname -s 00:22:58.347 20:54:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.347 20:54:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.347 20:54:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.347 20:54:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.347 20:54:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.347 20:54:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.347 20:54:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.347 20:54:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.347 20:54:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.347 20:54:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.347 20:54:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:58.347 20:54:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:22:58.347 20:54:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.347 20:54:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.347 20:54:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.347 20:54:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.347 20:54:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.347 20:54:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.347 20:54:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.347 20:54:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.347 20:54:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.347 20:54:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.348 20:54:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.348 20:54:22 -- paths/export.sh@5 -- # export PATH 00:22:58.348 20:54:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.348 20:54:22 -- nvmf/common.sh@47 -- # : 0 00:22:58.348 20:54:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.348 20:54:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.348 20:54:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.348 20:54:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.348 20:54:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.348 20:54:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.348 20:54:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.348 20:54:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.348 20:54:22 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:58.348 20:54:22 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:58.348 20:54:22 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:58.348 20:54:22 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:58.348 20:54:22 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:58.348 20:54:22 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:58.348 20:54:22 -- host/discovery.sh@25 -- # nvmftestinit 00:22:58.348 20:54:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:58.348 20:54:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.348 20:54:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:58.348 20:54:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:58.348 20:54:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:58.348 20:54:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.348 20:54:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.348 20:54:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.348 20:54:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:22:58.348 20:54:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:58.348 20:54:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.348 20:54:22 -- common/autotest_common.sh@10 -- # set +x 00:23:06.496 20:54:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:06.496 20:54:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.496 20:54:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.496 20:54:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.496 20:54:29 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.496 20:54:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.496 20:54:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.496 20:54:29 -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.496 20:54:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.496 20:54:29 -- nvmf/common.sh@296 -- # e810=() 00:23:06.496 20:54:29 -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.496 20:54:29 -- nvmf/common.sh@297 -- # x722=() 00:23:06.496 20:54:29 -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.496 20:54:29 -- nvmf/common.sh@298 -- # mlx=() 00:23:06.496 20:54:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.496 20:54:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.496 20:54:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.496 20:54:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.496 20:54:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.496 20:54:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.496 20:54:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:06.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:06.496 20:54:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.496 20:54:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:06.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:06.496 20:54:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.496 20:54:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.496 
20:54:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.496 20:54:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:06.496 20:54:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.496 20:54:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:06.496 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:06.496 20:54:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.496 20:54:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.496 20:54:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.496 20:54:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:06.496 20:54:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.496 20:54:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:06.496 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:06.496 20:54:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.496 20:54:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:06.496 20:54:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:06.496 20:54:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:06.496 20:54:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:06.496 20:54:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.496 20:54:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.496 20:54:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.496 20:54:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.496 20:54:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.496 20:54:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.496 20:54:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.496 20:54:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.496 20:54:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.496 20:54:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.496 20:54:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.496 20:54:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.496 20:54:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.496 20:54:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.496 20:54:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.496 20:54:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.496 20:54:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.496 20:54:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.496 20:54:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.496 20:54:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:23:06.496 00:23:06.496 --- 10.0.0.2 ping statistics --- 00:23:06.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.496 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:23:06.496 20:54:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:23:06.497 00:23:06.497 --- 10.0.0.1 ping statistics --- 00:23:06.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.497 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:23:06.497 20:54:30 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.497 20:54:30 -- nvmf/common.sh@411 -- # return 0 00:23:06.497 20:54:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:06.497 20:54:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.497 20:54:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:06.497 20:54:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:06.497 20:54:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.497 20:54:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:06.497 20:54:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:06.497 20:54:30 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:06.497 20:54:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:06.497 20:54:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:06.497 20:54:30 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 20:54:30 -- nvmf/common.sh@470 -- # nvmfpid=2880325 00:23:06.497 20:54:30 -- nvmf/common.sh@471 -- # waitforlisten 2880325 00:23:06.497 20:54:30 -- common/autotest_common.sh@817 -- # '[' -z 2880325 ']' 00:23:06.497 20:54:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.497 20:54:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.497 20:54:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.497 20:54:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.497 20:54:30 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:06.497 20:54:30 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 [2024-04-24 20:54:30.115822] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:23:06.497 [2024-04-24 20:54:30.115885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.497 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.497 [2024-04-24 20:54:30.188190] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.497 [2024-04-24 20:54:30.258987] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.497 [2024-04-24 20:54:30.259027] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.497 [2024-04-24 20:54:30.259039] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.497 [2024-04-24 20:54:30.259045] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.497 [2024-04-24 20:54:30.259051] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
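For reference, the nvmf_tcp_init trace above amounts to a small network-namespace fixture: one port of the e810 pair (cvl_0_0 in this run) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while its link partner cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, with an iptables rule admitting the NVMe/TCP port. A minimal sketch of that setup, assuming those interface names (they are specific to this host):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Connectivity checks, matching the ping output above (~0.5 ms / ~0.2 ms RTT):
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself is then launched inside the namespace, which is the ip netns exec cvl_0_0_ns_spdk prefix visible on the nvmf_tgt invocation just above.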
00:23:06.497 [2024-04-24 20:54:30.259079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.497 20:54:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.497 20:54:30 -- common/autotest_common.sh@850 -- # return 0 00:23:06.497 20:54:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:06.497 20:54:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:06.497 20:54:30 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 20:54:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.497 20:54:31 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.497 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.497 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 [2024-04-24 20:54:31.038101] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.497 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.497 20:54:31 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:06.497 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.497 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 [2024-04-24 20:54:31.046240] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:06.497 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.497 20:54:31 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:06.497 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.497 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 null0 00:23:06.497 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.497 20:54:31 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:06.497 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.497 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 null1 00:23:06.497 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.497 20:54:31 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:06.497 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.497 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.497 20:54:31 -- host/discovery.sh@45 -- # hostpid=2880506 00:23:06.497 20:54:31 -- host/discovery.sh@46 -- # waitforlisten 2880506 /tmp/host.sock 00:23:06.497 20:54:31 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:06.497 20:54:31 -- common/autotest_common.sh@817 -- # '[' -z 2880506 ']' 00:23:06.497 20:54:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:06.497 20:54:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:06.497 20:54:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:06.497 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:06.497 20:54:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:06.497 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.497 [2024-04-24 20:54:31.125019] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:23:06.497 [2024-04-24 20:54:31.125066] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880506 ] 00:23:06.759 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.759 [2024-04-24 20:54:31.199237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.759 [2024-04-24 20:54:31.261691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.759 20:54:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:06.759 20:54:31 -- common/autotest_common.sh@850 -- # return 0 00:23:06.759 20:54:31 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.759 20:54:31 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:06.759 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.759 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.759 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.759 20:54:31 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:06.759 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.759 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.759 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.759 20:54:31 -- host/discovery.sh@72 -- # notify_id=0 00:23:06.759 20:54:31 -- host/discovery.sh@83 -- # get_subsystem_names 00:23:06.759 20:54:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:06.759 20:54:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:06.759 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.759 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:06.759 20:54:31 -- host/discovery.sh@59 -- # sort 00:23:06.759 20:54:31 -- host/discovery.sh@59 -- # xargs 00:23:06.759 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.021 20:54:31 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:07.021 20:54:31 -- host/discovery.sh@84 -- # get_bdev_list 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # xargs 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:07.021 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # sort 00:23:07.021 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.021 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.021 20:54:31 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:07.021 20:54:31 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:07.021 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.021 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.021 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.021 20:54:31 -- host/discovery.sh@87 -- # get_subsystem_names 00:23:07.021 20:54:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:07.021 20:54:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:07.021 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.021 20:54:31 -- host/discovery.sh@59 -- # sort 
00:23:07.021 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.021 20:54:31 -- host/discovery.sh@59 -- # xargs 00:23:07.021 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.021 20:54:31 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:07.021 20:54:31 -- host/discovery.sh@88 -- # get_bdev_list 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # sort 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:07.021 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.021 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.021 20:54:31 -- host/discovery.sh@55 -- # xargs 00:23:07.021 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.022 20:54:31 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:07.022 20:54:31 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:07.022 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.022 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.022 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.022 20:54:31 -- host/discovery.sh@91 -- # get_subsystem_names 00:23:07.022 20:54:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:07.022 20:54:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:07.022 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.022 20:54:31 -- host/discovery.sh@59 -- # sort 00:23:07.022 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.022 20:54:31 -- host/discovery.sh@59 -- # xargs 00:23:07.022 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.022 20:54:31 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:07.022 20:54:31 -- host/discovery.sh@92 -- # get_bdev_list 00:23:07.022 20:54:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.022 20:54:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:07.022 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.022 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.022 20:54:31 -- host/discovery.sh@55 -- # sort 00:23:07.022 20:54:31 -- host/discovery.sh@55 -- # xargs 00:23:07.283 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.283 20:54:31 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:07.283 20:54:31 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:07.283 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.283 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.283 [2024-04-24 20:54:31.707968] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.283 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.283 20:54:31 -- host/discovery.sh@97 -- # get_subsystem_names 00:23:07.283 20:54:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:07.283 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.283 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.283 20:54:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:07.283 20:54:31 -- host/discovery.sh@59 -- # sort 00:23:07.283 20:54:31 -- host/discovery.sh@59 -- # xargs 00:23:07.283 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.283 20:54:31 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:07.283 20:54:31 -- host/discovery.sh@98 -- # get_bdev_list 00:23:07.283 20:54:31 -- host/discovery.sh@55 -- # sort 00:23:07.283 20:54:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.283 20:54:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:07.283 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.283 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.283 20:54:31 -- host/discovery.sh@55 -- # xargs 00:23:07.283 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.283 20:54:31 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:07.283 20:54:31 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:07.283 20:54:31 -- host/discovery.sh@79 -- # expected_count=0 00:23:07.283 20:54:31 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:07.283 20:54:31 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:07.283 20:54:31 -- common/autotest_common.sh@901 -- # local max=10 00:23:07.283 20:54:31 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:07.283 20:54:31 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:07.283 20:54:31 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:07.284 20:54:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:07.284 20:54:31 -- host/discovery.sh@74 -- # jq '. | length' 00:23:07.284 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.284 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.284 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.284 20:54:31 -- host/discovery.sh@74 -- # notification_count=0 00:23:07.284 20:54:31 -- host/discovery.sh@75 -- # notify_id=0 00:23:07.284 20:54:31 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:07.284 20:54:31 -- common/autotest_common.sh@904 -- # return 0 00:23:07.284 20:54:31 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:07.284 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.284 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.284 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.284 20:54:31 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:07.284 20:54:31 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:07.284 20:54:31 -- common/autotest_common.sh@901 -- # local max=10 00:23:07.284 20:54:31 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:07.284 20:54:31 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:07.284 20:54:31 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:07.284 20:54:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:07.284 20:54:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:07.284 20:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.284 20:54:31 -- host/discovery.sh@59 -- # sort 00:23:07.284 20:54:31 -- common/autotest_common.sh@10 -- # set +x 00:23:07.284 20:54:31 -- host/discovery.sh@59 -- # xargs 00:23:07.284 20:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:23:07.548 20:54:31 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:23:07.548 20:54:31 -- common/autotest_common.sh@906 -- # sleep 1 00:23:07.812 [2024-04-24 20:54:32.406744] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:07.812 [2024-04-24 20:54:32.406766] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:07.812 [2024-04-24 20:54:32.406780] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:08.072 [2024-04-24 20:54:32.537202] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:08.333 [2024-04-24 20:54:32.756981] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:08.333 [2024-04-24 20:54:32.757005] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:08.333 20:54:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.333 20:54:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:08.333 20:54:32 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:08.333 20:54:32 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:08.333 20:54:32 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:08.333 20:54:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.333 20:54:32 -- common/autotest_common.sh@10 -- # set +x 00:23:08.333 20:54:32 -- host/discovery.sh@59 -- # sort 00:23:08.333 20:54:32 -- host/discovery.sh@59 -- # xargs 00:23:08.333 20:54:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.594 20:54:32 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.594 20:54:32 -- common/autotest_common.sh@904 -- # return 0 00:23:08.594 20:54:32 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:08.594 20:54:32 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:08.594 20:54:32 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.594 20:54:32 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.594 20:54:32 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:08.594 20:54:32 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:08.594 20:54:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.594 20:54:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.594 20:54:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.594 20:54:32 -- host/discovery.sh@55 -- # sort 00:23:08.594 20:54:32 -- common/autotest_common.sh@10 -- # set +x 00:23:08.594 20:54:32 -- host/discovery.sh@55 -- # xargs 00:23:08.594 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:23:08.594 20:54:33 -- common/autotest_common.sh@904 -- # return 0 00:23:08.594 20:54:33 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:08.594 20:54:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:08.594 20:54:33 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.594 20:54:33 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:08.594 20:54:33 -- host/discovery.sh@63 -- # xargs 00:23:08.594 20:54:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:08.594 20:54:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:08.594 20:54:33 -- host/discovery.sh@63 -- # sort -n 00:23:08.594 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.594 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.594 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:23:08.594 20:54:33 -- common/autotest_common.sh@904 -- # return 0 00:23:08.594 20:54:33 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:08.594 20:54:33 -- host/discovery.sh@79 -- # expected_count=1 00:23:08.594 20:54:33 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:08.594 20:54:33 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:08.594 20:54:33 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.594 20:54:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:08.594 20:54:33 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:08.594 20:54:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:08.594 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.594 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.594 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.594 20:54:33 -- host/discovery.sh@74 -- # notification_count=1 00:23:08.594 20:54:33 -- host/discovery.sh@75 -- # notify_id=1 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:08.594 20:54:33 -- common/autotest_common.sh@904 -- # return 0 00:23:08.594 20:54:33 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:08.594 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.594 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.594 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.594 20:54:33 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:08.594 20:54:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:08.594 20:54:33 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.594 20:54:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:08.594 20:54:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.594 20:54:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.594 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.594 20:54:33 -- host/discovery.sh@55 -- # sort 00:23:08.594 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.594 20:54:33 -- host/discovery.sh@55 -- # xargs 00:23:08.594 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:08.594 20:54:33 -- common/autotest_common.sh@904 -- # return 0 00:23:08.594 20:54:33 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:08.594 20:54:33 -- host/discovery.sh@79 -- # expected_count=1 00:23:08.594 20:54:33 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:08.594 20:54:33 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:08.594 20:54:33 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.594 20:54:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:08.594 20:54:33 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:08.594 20:54:33 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:08.594 20:54:33 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:08.594 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.594 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.594 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.855 20:54:33 -- host/discovery.sh@74 -- # notification_count=1 00:23:08.855 20:54:33 -- host/discovery.sh@75 -- # notify_id=2 00:23:08.855 20:54:33 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:08.855 20:54:33 -- common/autotest_common.sh@904 -- # return 0 00:23:08.855 20:54:33 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:08.855 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.855 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.855 [2024-04-24 20:54:33.248286] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:08.855 [2024-04-24 20:54:33.248818] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:08.855 [2024-04-24 20:54:33.248843] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:08.855 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.855 20:54:33 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:08.855 20:54:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:08.855 20:54:33 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.855 20:54:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.855 20:54:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:08.855 20:54:33 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:08.855 20:54:33 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:08.855 20:54:33 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:08.855 20:54:33 -- host/discovery.sh@59 -- # sort 00:23:08.855 20:54:33 -- host/discovery.sh@59 -- # xargs 00:23:08.855 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.855 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.855 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.855 20:54:33 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.855 20:54:33 -- common/autotest_common.sh@904 -- # return 0 00:23:08.855 20:54:33 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:08.855 20:54:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:08.855 20:54:33 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.855 20:54:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.855 20:54:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:08.855 20:54:33 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:08.856 20:54:33 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.856 20:54:33 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:08.856 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.856 20:54:33 -- host/discovery.sh@55 -- # sort 00:23:08.856 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.856 20:54:33 -- host/discovery.sh@55 -- # xargs 00:23:08.856 20:54:33 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:23:08.856 20:54:33 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:08.856 20:54:33 -- common/autotest_common.sh@904 -- # return 0 00:23:08.856 20:54:33 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:08.856 20:54:33 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:08.856 20:54:33 -- common/autotest_common.sh@901 -- # local max=10 00:23:08.856 20:54:33 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:08.856 20:54:33 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:08.856 20:54:33 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:08.856 20:54:33 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:08.856 20:54:33 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:08.856 20:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.856 20:54:33 -- host/discovery.sh@63 -- # sort -n 00:23:08.856 20:54:33 -- common/autotest_common.sh@10 -- # set +x 00:23:08.856 20:54:33 -- host/discovery.sh@63 -- # xargs 00:23:08.856 [2024-04-24 20:54:33.377266] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:08.856 20:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.856 20:54:33 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:08.856 20:54:33 -- common/autotest_common.sh@906 -- # sleep 1 00:23:09.116 [2024-04-24 20:54:33.644591] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:09.116 [2024-04-24 20:54:33.644612] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:09.116 [2024-04-24 20:54:33.644618] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:10.059 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.059 20:54:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:10.059 20:54:34 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:10.059 20:54:34 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:10.059 20:54:34 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:10.059 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.059 20:54:34 -- host/discovery.sh@63 -- # sort -n 00:23:10.059 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.059 20:54:34 -- host/discovery.sh@63 -- # xargs 00:23:10.059 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.059 20:54:34 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:10.059 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.059 20:54:34 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:10.059 20:54:34 -- host/discovery.sh@79 -- # expected_count=0 00:23:10.059 20:54:34 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:10.059 20:54:34 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:10.060 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.060 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:10.060 20:54:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:10.060 20:54:34 -- host/discovery.sh@74 -- # jq '. | length' 00:23:10.060 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.060 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.060 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.060 20:54:34 -- host/discovery.sh@74 -- # notification_count=0 00:23:10.060 20:54:34 -- host/discovery.sh@75 -- # notify_id=2 00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:10.060 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.060 20:54:34 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:10.060 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.060 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.060 [2024-04-24 20:54:34.528161] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:10.060 [2024-04-24 20:54:34.528182] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:10.060 [2024-04-24 20:54:34.528566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.060 [2024-04-24 20:54:34.528584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.060 [2024-04-24 20:54:34.528592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.060 [2024-04-24 20:54:34.528600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.060 [2024-04-24 20:54:34.528608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.060 [2024-04-24 20:54:34.528615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.060 [2024-04-24 20:54:34.528622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:10.060 [2024-04-24 20:54:34.528633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:10.060 [2024-04-24 20:54:34.528640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.060 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.060 20:54:34 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:10.060 20:54:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:23:10.060 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.060 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:10.060 20:54:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.060 [2024-04-24 20:54:34.538576] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.060 20:54:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.060 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.060 20:54:34 -- host/discovery.sh@59 -- # sort 00:23:10.060 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.060 20:54:34 -- host/discovery.sh@59 -- # xargs 00:23:10.060 [2024-04-24 20:54:34.548614] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:10.060 [2024-04-24 20:54:34.549055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.549404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.549418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c80cc0 with addr=10.0.0.2, port=4420 00:23:10.060 [2024-04-24 20:54:34.549427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.060 [2024-04-24 20:54:34.549446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.060 [2024-04-24 20:54:34.549471] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.060 [2024-04-24 20:54:34.549479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:10.060 [2024-04-24 20:54:34.549487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.060 [2024-04-24 20:54:34.549502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:10.060 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.060 [2024-04-24 20:54:34.558671] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:10.060 [2024-04-24 20:54:34.559039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.559416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.559429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c80cc0 with addr=10.0.0.2, port=4420 00:23:10.060 [2024-04-24 20:54:34.559439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.060 [2024-04-24 20:54:34.559457] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.060 [2024-04-24 20:54:34.559468] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.060 [2024-04-24 20:54:34.559475] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:10.060 [2024-04-24 20:54:34.559483] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.060 [2024-04-24 20:54:34.559502] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.060 [2024-04-24 20:54:34.568729] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:10.060 [2024-04-24 20:54:34.569064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.569416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.569426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c80cc0 with addr=10.0.0.2, port=4420 00:23:10.060 [2024-04-24 20:54:34.569433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.060 [2024-04-24 20:54:34.569445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.060 [2024-04-24 20:54:34.569455] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.060 [2024-04-24 20:54:34.569461] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:10.060 [2024-04-24 20:54:34.569468] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.060 [2024-04-24 20:54:34.569479] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:10.060 [2024-04-24 20:54:34.578784] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:10.060 [2024-04-24 20:54:34.579123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.579458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.579470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c80cc0 with addr=10.0.0.2, port=4420 00:23:10.060 [2024-04-24 20:54:34.579477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.060 [2024-04-24 20:54:34.579488] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.060 [2024-04-24 20:54:34.579498] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.060 [2024-04-24 20:54:34.579505] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:10.060 [2024-04-24 20:54:34.579511] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.060 [2024-04-24 20:54:34.579522] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.060 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.060 20:54:34 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.060 20:54:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:10.060 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.060 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:10.060 [2024-04-24 20:54:34.588840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:10.060 [2024-04-24 20:54:34.589154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.589462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.060 [2024-04-24 20:54:34.589473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c80cc0 with addr=10.0.0.2, port=4420 00:23:10.060 [2024-04-24 20:54:34.589480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.060 [2024-04-24 20:54:34.589491] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.060 [2024-04-24 20:54:34.589504] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.060 [2024-04-24 20:54:34.589511] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:10.060 [2024-04-24 20:54:34.589517] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.060 [2024-04-24 20:54:34.589528] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
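The repeated connect()/reset failures above are the host-side bdev_nvme driver retrying port 4420 after the target dropped that listener; errno 111 is ECONNREFUSED, so each reconnect attempt is refused until the discovery log page steers the host to 4421 only. The target-side call that triggers this is the one shown earlier in the trace (rpc_cmd in these logs appears to wrap scripts/rpc.py):

# Target-side RPC from the trace: drop the 4420 listener while the host still
# holds a connection to it. The host then logs ECONNREFUSED (errno 111) retries
# until discovery reports only the 4421 path.
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420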
00:23:10.060 20:54:34 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:10.060 20:54:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.060 20:54:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.060 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.061 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.061 20:54:34 -- host/discovery.sh@55 -- # sort 00:23:10.061 20:54:34 -- host/discovery.sh@55 -- # xargs 00:23:10.061 [2024-04-24 20:54:34.598890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:10.061 [2024-04-24 20:54:34.599251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.061 [2024-04-24 20:54:34.599598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.061 [2024-04-24 20:54:34.599608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c80cc0 with addr=10.0.0.2, port=4420 00:23:10.061 [2024-04-24 20:54:34.599615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.061 [2024-04-24 20:54:34.599626] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.061 [2024-04-24 20:54:34.599636] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.061 [2024-04-24 20:54:34.599642] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:10.061 [2024-04-24 20:54:34.599649] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.061 [2024-04-24 20:54:34.599660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:10.061 [2024-04-24 20:54:34.608945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:10.061 [2024-04-24 20:54:34.609258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.061 [2024-04-24 20:54:34.609557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:10.061 [2024-04-24 20:54:34.609568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c80cc0 with addr=10.0.0.2, port=4420 00:23:10.061 [2024-04-24 20:54:34.609575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c80cc0 is same with the state(5) to be set 00:23:10.061 [2024-04-24 20:54:34.609586] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c80cc0 (9): Bad file descriptor 00:23:10.061 [2024-04-24 20:54:34.609596] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:10.061 [2024-04-24 20:54:34.609602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:10.061 [2024-04-24 20:54:34.609609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:10.061 [2024-04-24 20:54:34.609619] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
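The rpc_cmd | jq pipelines traced above are small helpers in host/discovery.sh that flatten JSON-RPC output into space-separated strings so the [[ ... == ... ]] comparisons can stay in bash. A sketch of their shape as it appears in the trace (the notify_id update rule is inferred from the ids 2 and 4 shown in this log, so treat it as an assumption):

# Helpers sketched from the host/discovery.sh trace; "-s /tmp/host.sock"
# targets the host-side SPDK app rather than the default target socket.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # Lists the listener ports (trsvcid) controller $1 is connected to,
    # e.g. "4420 4421" while both listeners exist, "4421" after 4420 is removed.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

get_notification_count() {
    # "-i $notify_id" asks only for notifications newer than the last seen id;
    # the trace shows notify_id moving 2 -> 4 as two new notifications arrive.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}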
00:23:10.061 [2024-04-24 20:54:34.615749] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:10.061 [2024-04-24 20:54:34.615767] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:10.061 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.061 20:54:34 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:10.061 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.061 20:54:34 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:10.061 20:54:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:10.061 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.061 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.061 20:54:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:10.061 20:54:34 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:23:10.061 20:54:34 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:10.061 20:54:34 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:10.061 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.061 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.061 20:54:34 -- host/discovery.sh@63 -- # sort -n 00:23:10.061 20:54:34 -- host/discovery.sh@63 -- # xargs 00:23:10.061 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.061 20:54:34 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:23:10.061 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.061 20:54:34 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:10.322 20:54:34 -- host/discovery.sh@79 -- # expected_count=0 00:23:10.322 20:54:34 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:10.322 20:54:34 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:10.322 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.322 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:10.322 20:54:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:10.322 20:54:34 -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:10.322 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.322 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.322 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.322 20:54:34 -- host/discovery.sh@74 -- # notification_count=0 00:23:10.322 20:54:34 -- host/discovery.sh@75 -- # notify_id=2 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:10.322 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.322 20:54:34 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:10.322 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.322 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.322 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.322 20:54:34 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:10.322 20:54:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:10.322 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.322 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:23:10.322 20:54:34 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:10.322 20:54:34 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:10.322 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.322 20:54:34 -- host/discovery.sh@59 -- # sort 00:23:10.322 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.322 20:54:34 -- host/discovery.sh@59 -- # xargs 00:23:10.322 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:10.322 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.322 20:54:34 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:10.322 20:54:34 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:10.322 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.322 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # get_bdev_list 00:23:10.322 20:54:34 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.322 20:54:34 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:10.322 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.322 20:54:34 -- host/discovery.sh@55 -- # sort 00:23:10.322 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.322 20:54:34 -- host/discovery.sh@55 -- # xargs 00:23:10.322 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:23:10.322 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.322 20:54:34 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:10.322 20:54:34 -- host/discovery.sh@79 -- # expected_count=2 00:23:10.322 20:54:34 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:10.322 20:54:34 -- common/autotest_common.sh@900 -- # 
local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:10.322 20:54:34 -- common/autotest_common.sh@901 -- # local max=10 00:23:10.322 20:54:34 -- common/autotest_common.sh@902 -- # (( max-- )) 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # get_notification_count 00:23:10.322 20:54:34 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:10.322 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.322 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:10.322 20:54:34 -- host/discovery.sh@74 -- # jq '. | length' 00:23:10.322 20:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.322 20:54:34 -- host/discovery.sh@74 -- # notification_count=2 00:23:10.322 20:54:34 -- host/discovery.sh@75 -- # notify_id=4 00:23:10.322 20:54:34 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:23:10.322 20:54:34 -- common/autotest_common.sh@904 -- # return 0 00:23:10.322 20:54:34 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:10.322 20:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.322 20:54:34 -- common/autotest_common.sh@10 -- # set +x 00:23:11.707 [2024-04-24 20:54:35.984917] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:11.707 [2024-04-24 20:54:35.984933] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:11.707 [2024-04-24 20:54:35.984945] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:11.707 [2024-04-24 20:54:36.073214] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:11.707 [2024-04-24 20:54:36.137871] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:11.707 [2024-04-24 20:54:36.137901] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:11.707 20:54:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.707 20:54:36 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:11.707 20:54:36 -- common/autotest_common.sh@638 -- # local es=0 00:23:11.707 20:54:36 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:11.707 20:54:36 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:11.707 20:54:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.707 20:54:36 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:11.707 20:54:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.707 20:54:36 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:11.707 20:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.707 20:54:36 -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.707 request: 00:23:11.707 { 00:23:11.707 "name": "nvme", 00:23:11.707 "trtype": "tcp", 00:23:11.707 "traddr": "10.0.0.2", 00:23:11.707 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:11.707 "adrfam": "ipv4", 00:23:11.707 "trsvcid": "8009", 00:23:11.707 "wait_for_attach": true, 00:23:11.707 "method": "bdev_nvme_start_discovery", 00:23:11.707 "req_id": 1 00:23:11.707 } 00:23:11.707 Got JSON-RPC error response 00:23:11.707 response: 00:23:11.707 { 00:23:11.707 "code": -17, 00:23:11.707 "message": "File exists" 00:23:11.707 } 00:23:11.707 20:54:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:11.707 20:54:36 -- common/autotest_common.sh@641 -- # es=1 00:23:11.707 20:54:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:11.707 20:54:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:11.707 20:54:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:11.707 20:54:36 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:11.707 20:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:11.707 20:54:36 -- common/autotest_common.sh@10 -- # set +x 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # sort 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # xargs 00:23:11.707 20:54:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.707 20:54:36 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:11.707 20:54:36 -- host/discovery.sh@146 -- # get_bdev_list 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # xargs 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:11.707 20:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # sort 00:23:11.707 20:54:36 -- common/autotest_common.sh@10 -- # set +x 00:23:11.707 20:54:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.707 20:54:36 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:11.707 20:54:36 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:11.707 20:54:36 -- common/autotest_common.sh@638 -- # local es=0 00:23:11.707 20:54:36 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:11.707 20:54:36 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:11.707 20:54:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.707 20:54:36 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:11.707 20:54:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.707 20:54:36 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:11.707 20:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.707 20:54:36 -- common/autotest_common.sh@10 -- # set +x 00:23:11.707 request: 00:23:11.707 { 00:23:11.707 "name": "nvme_second", 00:23:11.707 "trtype": "tcp", 00:23:11.707 "traddr": "10.0.0.2", 00:23:11.707 "hostnqn": 
"nqn.2021-12.io.spdk:test", 00:23:11.707 "adrfam": "ipv4", 00:23:11.707 "trsvcid": "8009", 00:23:11.707 "wait_for_attach": true, 00:23:11.707 "method": "bdev_nvme_start_discovery", 00:23:11.707 "req_id": 1 00:23:11.707 } 00:23:11.707 Got JSON-RPC error response 00:23:11.707 response: 00:23:11.707 { 00:23:11.707 "code": -17, 00:23:11.707 "message": "File exists" 00:23:11.707 } 00:23:11.707 20:54:36 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:11.707 20:54:36 -- common/autotest_common.sh@641 -- # es=1 00:23:11.707 20:54:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:11.707 20:54:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:11.707 20:54:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:11.707 20:54:36 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:11.707 20:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # xargs 00:23:11.707 20:54:36 -- common/autotest_common.sh@10 -- # set +x 00:23:11.707 20:54:36 -- host/discovery.sh@67 -- # sort 00:23:11.707 20:54:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.707 20:54:36 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:11.707 20:54:36 -- host/discovery.sh@152 -- # get_bdev_list 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # xargs 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:11.707 20:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.707 20:54:36 -- host/discovery.sh@55 -- # sort 00:23:11.707 20:54:36 -- common/autotest_common.sh@10 -- # set +x 00:23:11.969 20:54:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.969 20:54:36 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:11.969 20:54:36 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:11.969 20:54:36 -- common/autotest_common.sh@638 -- # local es=0 00:23:11.969 20:54:36 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:11.969 20:54:36 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:23:11.969 20:54:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.969 20:54:36 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:23:11.969 20:54:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:23:11.969 20:54:36 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:11.969 20:54:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.969 20:54:36 -- common/autotest_common.sh@10 -- # set +x 00:23:12.910 [2024-04-24 20:54:37.401386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.910 [2024-04-24 20:54:37.401709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:12.910 [2024-04-24 20:54:37.401723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1c9bb70 with addr=10.0.0.2, port=8010 00:23:12.910 [2024-04-24 20:54:37.401740] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:12.910 [2024-04-24 20:54:37.401747] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:12.910 [2024-04-24 20:54:37.401754] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:13.852 [2024-04-24 20:54:38.403658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.852 [2024-04-24 20:54:38.403968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:13.852 [2024-04-24 20:54:38.403980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c9bb70 with addr=10.0.0.2, port=8010 00:23:13.852 [2024-04-24 20:54:38.403993] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:13.852 [2024-04-24 20:54:38.404001] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:13.852 [2024-04-24 20:54:38.404008] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:14.793 [2024-04-24 20:54:39.405717] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:14.793 request: 00:23:14.793 { 00:23:14.793 "name": "nvme_second", 00:23:14.793 "trtype": "tcp", 00:23:14.793 "traddr": "10.0.0.2", 00:23:14.793 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:14.793 "adrfam": "ipv4", 00:23:14.793 "trsvcid": "8010", 00:23:14.793 "attach_timeout_ms": 3000, 00:23:14.793 "method": "bdev_nvme_start_discovery", 00:23:14.793 "req_id": 1 00:23:14.793 } 00:23:14.793 Got JSON-RPC error response 00:23:14.793 response: 00:23:14.793 { 00:23:14.793 "code": -110, 00:23:14.793 "message": "Connection timed out" 00:23:14.793 } 00:23:14.793 20:54:39 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:14.793 20:54:39 -- common/autotest_common.sh@641 -- # es=1 00:23:14.793 20:54:39 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:23:14.793 20:54:39 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:23:14.793 20:54:39 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:23:14.793 20:54:39 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:14.793 20:54:39 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:14.793 20:54:39 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:14.793 20:54:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:14.793 20:54:39 -- host/discovery.sh@67 -- # sort 00:23:14.793 20:54:39 -- common/autotest_common.sh@10 -- # set +x 00:23:14.793 20:54:39 -- host/discovery.sh@67 -- # xargs 00:23:14.793 20:54:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.053 20:54:39 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:15.053 20:54:39 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:15.053 20:54:39 -- host/discovery.sh@161 -- # kill 2880506 00:23:15.054 20:54:39 -- host/discovery.sh@162 -- # nvmftestfini 00:23:15.054 20:54:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:15.054 20:54:39 -- nvmf/common.sh@117 -- # sync 00:23:15.054 20:54:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.054 20:54:39 -- nvmf/common.sh@120 -- # set +e 00:23:15.054 20:54:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.054 20:54:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.054 rmmod nvme_tcp 00:23:15.054 rmmod nvme_fabrics 
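Both JSON-RPC failures above are deliberate negative tests rather than real breakage: starting discovery a second time with a bdev name prefix that is already in use returns -17 ("File exists"), and pointing a discovery connection at port 8010, where nothing listens, exhausts its 3000 ms attach timeout and returns -110 ("Connection timed out"). The NOT wrapper used in the trace inverts the exit status so the test only passes if the RPC fails; the two calls, as they appear in host/discovery.sh@143 and @155, look like this:

# Duplicate start with the same "-b nvme" prefix: expected to fail with -17.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -w

# Discovery against a port with no listener; -T 3000 bounds the attach wait
# so the call returns -110 instead of hanging.
NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
    -q nqn.2021-12.io.spdk:test -T 3000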
00:23:15.054 rmmod nvme_keyring 00:23:15.054 20:54:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.054 20:54:39 -- nvmf/common.sh@124 -- # set -e 00:23:15.054 20:54:39 -- nvmf/common.sh@125 -- # return 0 00:23:15.054 20:54:39 -- nvmf/common.sh@478 -- # '[' -n 2880325 ']' 00:23:15.054 20:54:39 -- nvmf/common.sh@479 -- # killprocess 2880325 00:23:15.054 20:54:39 -- common/autotest_common.sh@936 -- # '[' -z 2880325 ']' 00:23:15.054 20:54:39 -- common/autotest_common.sh@940 -- # kill -0 2880325 00:23:15.054 20:54:39 -- common/autotest_common.sh@941 -- # uname 00:23:15.054 20:54:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:15.054 20:54:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2880325 00:23:15.054 20:54:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:15.054 20:54:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:15.054 20:54:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2880325' 00:23:15.054 killing process with pid 2880325 00:23:15.054 20:54:39 -- common/autotest_common.sh@955 -- # kill 2880325 00:23:15.054 20:54:39 -- common/autotest_common.sh@960 -- # wait 2880325 00:23:15.314 20:54:39 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:15.314 20:54:39 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:15.314 20:54:39 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:15.314 20:54:39 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:15.314 20:54:39 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:15.314 20:54:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:15.314 20:54:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:15.314 20:54:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.227 20:54:41 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:17.227 00:23:17.227 real 0m19.225s 00:23:17.227 user 0m21.891s 00:23:17.227 sys 0m6.814s 00:23:17.227 20:54:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:17.227 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:23:17.227 ************************************ 00:23:17.227 END TEST nvmf_discovery 00:23:17.227 ************************************ 00:23:17.227 20:54:41 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:17.227 20:54:41 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:17.227 20:54:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:17.228 20:54:41 -- common/autotest_common.sh@10 -- # set +x 00:23:17.488 ************************************ 00:23:17.488 START TEST nvmf_discovery_remove_ifc 00:23:17.488 ************************************ 00:23:17.488 20:54:41 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:17.488 * Looking for test storage... 
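Before the discovery_remove_ifc test proper starts, nvmftestinit (traced a little further below in nvmf/common.sh) moves one of the two e810 ports into a network namespace so target and initiator can talk over real NICs on 10.0.0.0/24. Condensed from that trace, with the interface and namespace names being whatever this rig exposes (cvl_0_0, cvl_0_1, cvl_0_0_ns_spdk), the setup amounts to:

# Condensed from the nvmf_tcp_init trace below; names are rig-specific.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The ping statistics printed in the log below are the sanity check that this wiring worked before the target application is started inside the namespace.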
00:23:17.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:17.489 20:54:42 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.489 20:54:42 -- nvmf/common.sh@7 -- # uname -s 00:23:17.489 20:54:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.489 20:54:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.489 20:54:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.489 20:54:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.489 20:54:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.489 20:54:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.489 20:54:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.489 20:54:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.489 20:54:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.489 20:54:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.489 20:54:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:17.489 20:54:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:17.489 20:54:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.489 20:54:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.489 20:54:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.489 20:54:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.489 20:54:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.489 20:54:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.489 20:54:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.489 20:54:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.489 20:54:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.489 20:54:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.489 20:54:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.489 20:54:42 -- paths/export.sh@5 -- # export PATH 00:23:17.489 20:54:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.489 20:54:42 -- nvmf/common.sh@47 -- # : 0 00:23:17.489 20:54:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:17.489 20:54:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:17.489 20:54:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.489 20:54:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.489 20:54:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.489 20:54:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:17.489 20:54:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:17.489 20:54:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:17.750 20:54:42 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:17.750 20:54:42 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:17.750 20:54:42 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:17.750 20:54:42 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:17.750 20:54:42 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:17.750 20:54:42 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:17.750 20:54:42 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:17.750 20:54:42 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:17.750 20:54:42 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.750 20:54:42 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:17.750 20:54:42 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:17.750 20:54:42 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:17.750 20:54:42 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.750 20:54:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.750 20:54:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.750 20:54:42 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:17.750 20:54:42 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:17.750 20:54:42 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.750 20:54:42 -- common/autotest_common.sh@10 -- # set +x 00:23:25.927 20:54:49 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.927 20:54:49 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.927 20:54:49 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.927 20:54:49 
-- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.927 20:54:49 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.927 20:54:49 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.927 20:54:49 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.927 20:54:49 -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.927 20:54:49 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.927 20:54:49 -- nvmf/common.sh@296 -- # e810=() 00:23:25.927 20:54:49 -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.927 20:54:49 -- nvmf/common.sh@297 -- # x722=() 00:23:25.927 20:54:49 -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.927 20:54:49 -- nvmf/common.sh@298 -- # mlx=() 00:23:25.927 20:54:49 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.927 20:54:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.927 20:54:49 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.927 20:54:49 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:25.927 20:54:49 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.927 20:54:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.927 20:54:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:25.927 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:25.927 20:54:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.927 20:54:49 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:25.927 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:25.927 20:54:49 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.927 20:54:49 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:25.927 20:54:49 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.927 20:54:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.927 20:54:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:25.927 20:54:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.927 20:54:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:25.927 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:25.927 20:54:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.927 20:54:49 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.927 20:54:49 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.927 20:54:49 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:25.927 20:54:49 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.927 20:54:49 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:25.927 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:25.927 20:54:49 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.927 20:54:49 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:25.927 20:54:49 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:25.927 20:54:49 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:25.927 20:54:49 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:25.927 20:54:49 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.927 20:54:49 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.927 20:54:49 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.927 20:54:49 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.927 20:54:49 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.927 20:54:49 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.927 20:54:49 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.927 20:54:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.927 20:54:49 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.927 20:54:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.927 20:54:49 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.927 20:54:49 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.927 20:54:49 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.927 20:54:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.927 20:54:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.927 20:54:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.927 20:54:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.927 20:54:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.927 20:54:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.928 20:54:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:25.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.480 ms 00:23:25.928 00:23:25.928 --- 10.0.0.2 ping statistics --- 00:23:25.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.928 rtt min/avg/max/mdev = 0.480/0.480/0.480/0.000 ms 00:23:25.928 20:54:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:25.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:25.928 00:23:25.928 --- 10.0.0.1 ping statistics --- 00:23:25.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.928 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:25.928 20:54:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.928 20:54:49 -- nvmf/common.sh@411 -- # return 0 00:23:25.928 20:54:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:25.928 20:54:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.928 20:54:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:25.928 20:54:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:25.928 20:54:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.928 20:54:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:25.928 20:54:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:25.928 20:54:49 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:25.928 20:54:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:25.928 20:54:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:25.928 20:54:49 -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 20:54:49 -- nvmf/common.sh@470 -- # nvmfpid=2886471 00:23:25.928 20:54:49 -- nvmf/common.sh@471 -- # waitforlisten 2886471 00:23:25.928 20:54:49 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.928 20:54:49 -- common/autotest_common.sh@817 -- # '[' -z 2886471 ']' 00:23:25.928 20:54:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.928 20:54:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:25.928 20:54:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.928 20:54:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:25.928 20:54:49 -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 [2024-04-24 20:54:49.515378] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:23:25.928 [2024-04-24 20:54:49.515440] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.928 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.928 [2024-04-24 20:54:49.584977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.928 [2024-04-24 20:54:49.656539] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.928 [2024-04-24 20:54:49.656574] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:25.928 [2024-04-24 20:54:49.656582] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.928 [2024-04-24 20:54:49.656588] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.928 [2024-04-24 20:54:49.656594] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.928 [2024-04-24 20:54:49.656617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.928 20:54:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:25.928 20:54:49 -- common/autotest_common.sh@850 -- # return 0 00:23:25.928 20:54:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:25.928 20:54:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:25.928 20:54:49 -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 20:54:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.928 20:54:49 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:25.928 20:54:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.928 20:54:49 -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 [2024-04-24 20:54:49.798064] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.928 [2024-04-24 20:54:49.806222] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:25.928 null0 00:23:25.928 [2024-04-24 20:54:49.838226] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.928 20:54:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.928 20:54:49 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2886673 00:23:25.928 20:54:49 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2886673 /tmp/host.sock 00:23:25.928 20:54:49 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:25.928 20:54:49 -- common/autotest_common.sh@817 -- # '[' -z 2886673 ']' 00:23:25.928 20:54:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:25.928 20:54:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:25.928 20:54:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:25.928 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:25.928 20:54:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:25.928 20:54:49 -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 [2024-04-24 20:54:49.907472] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:23:25.928 [2024-04-24 20:54:49.907518] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2886673 ] 00:23:25.928 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.928 [2024-04-24 20:54:49.981214] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.928 [2024-04-24 20:54:50.047249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:25.928 20:54:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:25.928 20:54:50 -- common/autotest_common.sh@850 -- # return 0 00:23:25.928 20:54:50 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.928 20:54:50 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:25.928 20:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.928 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 20:54:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.928 20:54:50 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:25.928 20:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.928 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:23:25.928 20:54:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:25.928 20:54:50 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:25.928 20:54:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:25.928 20:54:50 -- common/autotest_common.sh@10 -- # set +x 00:23:26.872 [2024-04-24 20:54:51.204824] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:26.872 [2024-04-24 20:54:51.204845] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:26.872 [2024-04-24 20:54:51.204859] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:26.872 [2024-04-24 20:54:51.293126] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:26.872 [2024-04-24 20:54:51.397429] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:26.872 [2024-04-24 20:54:51.397477] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:26.872 [2024-04-24 20:54:51.397498] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:26.872 [2024-04-24 20:54:51.397516] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:26.872 [2024-04-24 20:54:51.397537] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:26.872 20:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:26.872 [2024-04-24 
20:54:51.404036] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1494510 was disconnected and freed. delete nvme_qpair. 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:26.872 20:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:26.872 20:54:51 -- common/autotest_common.sh@10 -- # set +x 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:26.872 20:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:26.872 20:54:51 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:27.134 20:54:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:27.134 20:54:51 -- common/autotest_common.sh@10 -- # set +x 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:27.134 20:54:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:27.134 20:54:51 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:28.077 20:54:52 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:28.077 20:54:52 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:28.077 20:54:52 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:28.077 20:54:52 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:28.077 20:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:28.077 20:54:52 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:28.077 20:54:52 -- common/autotest_common.sh@10 -- # set +x 00:23:28.077 20:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:28.077 20:54:52 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:28.077 20:54:52 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:29.463 20:54:53 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:29.463 20:54:53 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:29.463 20:54:53 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:29.463 20:54:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:29.463 20:54:53 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:29.463 20:54:53 -- common/autotest_common.sh@10 -- # set +x 00:23:29.463 20:54:53 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:29.463 20:54:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:29.463 20:54:53 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:29.463 20:54:53 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:30.406 20:54:54 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:30.406 20:54:54 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:30.406 20:54:54 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:30.406 20:54:54 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:30.406 20:54:54 -- common/autotest_common.sh@10 -- # set +x 00:23:30.406 20:54:54 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:30.406 20:54:54 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:30.406 20:54:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:30.406 20:54:54 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:30.406 20:54:54 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:31.348 20:54:55 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:31.348 20:54:55 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:31.349 20:54:55 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:31.349 20:54:55 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:31.349 20:54:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:31.349 20:54:55 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:31.349 20:54:55 -- common/autotest_common.sh@10 -- # set +x 00:23:31.349 20:54:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:31.349 20:54:55 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:31.349 20:54:55 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:32.292 [2024-04-24 20:54:56.838105] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:32.292 [2024-04-24 20:54:56.838145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.292 [2024-04-24 20:54:56.838156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.292 [2024-04-24 20:54:56.838166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.292 [2024-04-24 20:54:56.838174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.292 [2024-04-24 20:54:56.838182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.292 [2024-04-24 20:54:56.838191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.292 [2024-04-24 20:54:56.838199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.292 [2024-04-24 20:54:56.838210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.292 [2024-04-24 20:54:56.838218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:32.292 [2024-04-24 20:54:56.838226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:32.292 [2024-04-24 20:54:56.838233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a9e0 is same with the state(5) to be set 00:23:32.292 [2024-04-24 20:54:56.848128] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145a9e0 (9): Bad file descriptor 00:23:32.292 [2024-04-24 20:54:56.858166] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:32.292 20:54:56 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:32.292 20:54:56 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:32.292 20:54:56 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:32.292 20:54:56 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:32.292 20:54:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.292 20:54:56 -- common/autotest_common.sh@10 -- # set +x 00:23:32.292 20:54:56 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:33.235 [2024-04-24 20:54:57.874434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:34.623 [2024-04-24 20:54:58.897810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:34.623 [2024-04-24 20:54:58.897905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x145a9e0 with addr=10.0.0.2, port=4420 00:23:34.623 [2024-04-24 20:54:58.897938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145a9e0 is same with the state(5) to be set 00:23:34.623 [2024-04-24 20:54:58.898995] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145a9e0 (9): Bad file descriptor 00:23:34.623 [2024-04-24 20:54:58.899061] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:34.623 [2024-04-24 20:54:58.899109] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:34.623 [2024-04-24 20:54:58.899165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.623 [2024-04-24 20:54:58.899194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.623 [2024-04-24 20:54:58.899223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.623 [2024-04-24 20:54:58.899245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.623 [2024-04-24 20:54:58.899268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.623 [2024-04-24 20:54:58.899289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.623 [2024-04-24 20:54:58.899312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.623 [2024-04-24 20:54:58.899334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.623 [2024-04-24 20:54:58.899359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:34.623 [2024-04-24 20:54:58.899380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:34.623 [2024-04-24 20:54:58.899401] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
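The polling above drives the whole test: after the target-side interface is pulled, the host app is queried once per second over its private RPC socket until nvme0n1 drops out of bdev_get_bdevs. A minimal sketch of what the get_bdev_list/wait_for_bdev helpers appear to do, assuming rpc_cmd wraps SPDK's scripts/rpc.py and that the real helper adds a retry cap this sketch omits:

    # Sketch only - approximates the polling pattern visible in this trace.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed target of rpc_cmd

    get_bdev_list() {
        # Bdev names from the host app on /tmp/host.sock, sorted and joined
        # on one line so the comparison below is stable.
        "$rpc_py" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll until the list matches the expectation: "nvme0n1" while the
        # path is up, "" once the controller has been torn down.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }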
00:23:34.623 [2024-04-24 20:54:58.899444] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x145adf0 (9): Bad file descriptor 00:23:34.623 [2024-04-24 20:54:58.900092] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:34.623 [2024-04-24 20:54:58.900124] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:34.623 20:54:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:34.623 20:54:58 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:34.623 20:54:58 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.566 20:54:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.566 20:54:59 -- common/autotest_common.sh@10 -- # set +x 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.566 20:54:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.566 20:54:59 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:35.566 20:55:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:35.566 20:55:00 -- common/autotest_common.sh@10 -- # set +x 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:35.566 20:55:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:35.566 20:55:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.511 [2024-04-24 20:55:00.951680] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:36.511 [2024-04-24 20:55:00.951702] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:36.511 [2024-04-24 20:55:00.951717] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:36.511 [2024-04-24 20:55:01.082132] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:36.511 20:55:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:36.511 20:55:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:36.511 20:55:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:36.511 20:55:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:36.511 20:55:01 -- common/autotest_common.sh@10 -- # set +x 00:23:36.511 20:55:01 -- host/discovery_remove_ifc.sh@29 -- # sort 
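The failure above is induced, not incidental: the test deletes the target address and downs the port inside the target's network namespace, lets the host controller time out and drop its discovery entry, then restores the path so the discovery poller re-attaches the same subsystem as nvme1. Condensed, the two phases are the ip commands already visible in this trace:

    NS=cvl_0_0_ns_spdk   # target network namespace used in this run
    IF=cvl_0_0           # E810 port carrying 10.0.0.2

    # Phase 1: break the path; host-side I/O starts failing with errno 110
    ip netns exec "$NS" ip addr del 10.0.0.2/24 dev "$IF"
    ip netns exec "$NS" ip link set "$IF" down

    # (host app resets, fails, and removes the discovery entry here)

    # Phase 2: restore the path; discovery re-attaches and creates nvme1n1
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF"
    ip netns exec "$NS" ip link set "$IF" up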
00:23:36.511 20:55:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:36.772 20:55:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:36.772 20:55:01 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:36.772 20:55:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:36.772 [2024-04-24 20:55:01.262137] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:36.772 [2024-04-24 20:55:01.262177] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:36.772 [2024-04-24 20:55:01.262197] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:36.772 [2024-04-24 20:55:01.262212] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:36.772 [2024-04-24 20:55:01.262220] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:36.772 [2024-04-24 20:55:01.268939] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x14682f0 was disconnected and freed. delete nvme_qpair. 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:37.717 20:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:37.717 20:55:02 -- common/autotest_common.sh@10 -- # set +x 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:37.717 20:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:37.717 20:55:02 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2886673 00:23:37.717 20:55:02 -- common/autotest_common.sh@936 -- # '[' -z 2886673 ']' 00:23:37.717 20:55:02 -- common/autotest_common.sh@940 -- # kill -0 2886673 00:23:37.717 20:55:02 -- common/autotest_common.sh@941 -- # uname 00:23:37.717 20:55:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.717 20:55:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2886673 00:23:37.717 20:55:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:37.717 20:55:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:37.717 20:55:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2886673' 00:23:37.717 killing process with pid 2886673 00:23:37.717 20:55:02 -- common/autotest_common.sh@955 -- # kill 2886673 00:23:37.717 20:55:02 -- common/autotest_common.sh@960 -- # wait 2886673 00:23:37.978 20:55:02 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:37.978 20:55:02 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:37.978 20:55:02 -- nvmf/common.sh@117 -- # sync 00:23:37.978 20:55:02 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:37.978 20:55:02 -- nvmf/common.sh@120 -- # set +e 00:23:37.978 20:55:02 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:37.978 20:55:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:37.978 rmmod nvme_tcp 00:23:37.978 rmmod nvme_fabrics 00:23:37.978 rmmod nvme_keyring 00:23:37.978 20:55:02 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:37.978 20:55:02 -- nvmf/common.sh@124 -- # set -e 00:23:37.978 20:55:02 
-- nvmf/common.sh@125 -- # return 0 00:23:37.978 20:55:02 -- nvmf/common.sh@478 -- # '[' -n 2886471 ']' 00:23:37.978 20:55:02 -- nvmf/common.sh@479 -- # killprocess 2886471 00:23:37.978 20:55:02 -- common/autotest_common.sh@936 -- # '[' -z 2886471 ']' 00:23:37.979 20:55:02 -- common/autotest_common.sh@940 -- # kill -0 2886471 00:23:37.979 20:55:02 -- common/autotest_common.sh@941 -- # uname 00:23:37.979 20:55:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:37.979 20:55:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2886471 00:23:37.979 20:55:02 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:37.979 20:55:02 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:37.979 20:55:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2886471' 00:23:37.979 killing process with pid 2886471 00:23:37.979 20:55:02 -- common/autotest_common.sh@955 -- # kill 2886471 00:23:37.979 20:55:02 -- common/autotest_common.sh@960 -- # wait 2886471 00:23:38.240 20:55:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:38.240 20:55:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:38.240 20:55:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:38.240 20:55:02 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:38.240 20:55:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:38.240 20:55:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.240 20:55:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:38.240 20:55:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.172 20:55:04 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:40.172 00:23:40.172 real 0m22.790s 00:23:40.172 user 0m26.178s 00:23:40.172 sys 0m6.641s 00:23:40.172 20:55:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:40.172 20:55:04 -- common/autotest_common.sh@10 -- # set +x 00:23:40.172 ************************************ 00:23:40.172 END TEST nvmf_discovery_remove_ifc 00:23:40.172 ************************************ 00:23:40.433 20:55:04 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:40.433 20:55:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:40.433 20:55:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:40.433 20:55:04 -- common/autotest_common.sh@10 -- # set +x 00:23:40.433 ************************************ 00:23:40.433 START TEST nvmf_identify_kernel_target 00:23:40.433 ************************************ 00:23:40.433 20:55:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:40.433 * Looking for test storage... 
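Both SPDK processes are stopped with the same killprocess pattern seen above for pid 2886673 (the host app) and 2886471 (the nvmf target): confirm the pid is still alive and still an SPDK reactor, then kill it and reap it. A rough sketch of that flow; the real autotest_common.sh helper handles the sudo wrapper case and force-kill timeouts this sketch leaves out:

    killprocess() {
        # Sketch of the shutdown sequence this log records; not the exact helper.
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0      # process already gone
        if [[ $(uname) == Linux ]]; then
            # Check what we are about to stop (reactor_0/reactor_1 in this run)
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ $name != sudo ]] || return 1         # real helper special-cases sudo here
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true             # reap before the next test starts
    }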
00:23:40.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:40.694 20:55:05 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:40.694 20:55:05 -- nvmf/common.sh@7 -- # uname -s 00:23:40.694 20:55:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:40.694 20:55:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:40.694 20:55:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:40.694 20:55:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:40.694 20:55:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:40.694 20:55:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:40.694 20:55:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:40.694 20:55:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:40.695 20:55:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:40.695 20:55:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:40.695 20:55:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:40.695 20:55:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:40.695 20:55:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:40.695 20:55:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:40.695 20:55:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:40.695 20:55:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:40.695 20:55:05 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:40.695 20:55:05 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:40.695 20:55:05 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:40.695 20:55:05 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:40.695 20:55:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.695 20:55:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.695 20:55:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.695 20:55:05 -- paths/export.sh@5 -- # export PATH 00:23:40.695 20:55:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:40.695 20:55:05 -- nvmf/common.sh@47 -- # : 0 00:23:40.695 20:55:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:40.695 20:55:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:40.695 20:55:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:40.695 20:55:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:40.695 20:55:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:40.695 20:55:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:40.695 20:55:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:40.695 20:55:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:40.695 20:55:05 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:40.695 20:55:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:40.695 20:55:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:40.695 20:55:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:40.695 20:55:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:40.695 20:55:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:40.695 20:55:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.695 20:55:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.695 20:55:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:40.695 20:55:05 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:40.695 20:55:05 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:40.695 20:55:05 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:40.695 20:55:05 -- common/autotest_common.sh@10 -- # set +x 00:23:48.835 20:55:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:48.835 20:55:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.835 20:55:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.835 20:55:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.835 20:55:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.835 20:55:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.835 20:55:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.835 20:55:11 -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.835 20:55:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.835 20:55:11 -- nvmf/common.sh@296 -- # e810=() 00:23:48.835 20:55:11 -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.835 20:55:11 -- nvmf/common.sh@297 -- # 
x722=() 00:23:48.835 20:55:11 -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.835 20:55:11 -- nvmf/common.sh@298 -- # mlx=() 00:23:48.835 20:55:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.835 20:55:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.835 20:55:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.835 20:55:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.835 20:55:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.835 20:55:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.835 20:55:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:48.835 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:48.835 20:55:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.835 20:55:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:48.835 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:48.835 20:55:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.835 20:55:11 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.835 20:55:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.835 20:55:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:48.835 20:55:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.835 20:55:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:48.835 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:48.835 20:55:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 
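NIC selection resolves each detected E810 PCI function to its kernel netdev through sysfs, which is how cvl_0_0 and cvl_0_1 are tied to 0000:4b:00.0 and 0000:4b:00.1 above. Stripped of the surrounding autodetection, the lookup is a glob over each device's net/ directory:

    # List the net devices bound to each E810 function found above.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $dev ]] || continue               # skip if the driver exposes no netdev
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done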
00:23:48.835 20:55:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.835 20:55:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.835 20:55:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:48.835 20:55:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.835 20:55:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:48.835 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:48.835 20:55:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.835 20:55:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:48.835 20:55:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:48.835 20:55:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:48.835 20:55:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:48.835 20:55:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.835 20:55:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.835 20:55:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.835 20:55:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.835 20:55:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.835 20:55:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.835 20:55:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.835 20:55:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.835 20:55:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.835 20:55:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.835 20:55:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.835 20:55:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.835 20:55:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.835 20:55:12 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.835 20:55:12 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.835 20:55:12 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.835 20:55:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.835 20:55:12 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.835 20:55:12 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.835 20:55:12 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:23:48.835 00:23:48.835 --- 10.0.0.2 ping statistics --- 00:23:48.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.835 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:23:48.835 20:55:12 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:23:48.835 00:23:48.835 --- 10.0.0.1 ping statistics --- 00:23:48.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.835 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:48.835 20:55:12 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.835 20:55:12 -- nvmf/common.sh@411 -- # return 0 00:23:48.835 20:55:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:48.835 20:55:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.835 20:55:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:48.835 20:55:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:48.835 20:55:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.835 20:55:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:48.835 20:55:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:48.835 20:55:12 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:48.835 20:55:12 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:48.835 20:55:12 -- nvmf/common.sh@717 -- # local ip 00:23:48.835 20:55:12 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:48.835 20:55:12 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:48.835 20:55:12 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.835 20:55:12 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.835 20:55:12 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:48.835 20:55:12 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.835 20:55:12 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:48.835 20:55:12 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:48.835 20:55:12 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:48.835 20:55:12 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:48.835 20:55:12 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:48.835 20:55:12 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:48.835 20:55:12 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:48.835 20:55:12 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:48.835 20:55:12 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:48.835 20:55:12 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:48.835 20:55:12 -- nvmf/common.sh@628 -- # local block nvme 00:23:48.835 20:55:12 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:48.835 20:55:12 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:48.836 20:55:12 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:48.836 20:55:12 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:51.386 Waiting for block devices as requested 00:23:51.386 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:51.386 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:51.386 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:51.386 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:51.386 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:51.647 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:51.647 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:51.647 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:51.908 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:23:51.908 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:23:52.169 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:23:52.169 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:23:52.169 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:23:52.429 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:23:52.429 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:23:52.429 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:23:52.690 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:23:52.950 20:55:17 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:52.950 20:55:17 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:52.950 20:55:17 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:52.950 20:55:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:52.950 20:55:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:52.950 20:55:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:52.950 20:55:17 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:52.950 20:55:17 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:52.950 20:55:17 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:52.950 No valid GPT data, bailing 00:23:52.950 20:55:17 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:52.950 20:55:17 -- scripts/common.sh@391 -- # pt= 00:23:52.950 20:55:17 -- scripts/common.sh@392 -- # return 1 00:23:52.950 20:55:17 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:52.950 20:55:17 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:23:52.950 20:55:17 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:52.950 20:55:17 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:52.950 20:55:17 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:52.950 20:55:17 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:52.950 20:55:17 -- nvmf/common.sh@656 -- # echo 1 00:23:52.950 20:55:17 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:23:52.950 20:55:17 -- nvmf/common.sh@658 -- # echo 1 00:23:52.950 20:55:17 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:52.951 20:55:17 -- nvmf/common.sh@661 -- # echo tcp 00:23:52.951 20:55:17 -- nvmf/common.sh@662 -- # echo 4420 00:23:52.951 20:55:17 -- nvmf/common.sh@663 -- # echo ipv4 00:23:52.951 20:55:17 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:52.951 20:55:17 -- nvmf/common.sh@669 -- # 
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:23:52.951 00:23:52.951 Discovery Log Number of Records 2, Generation counter 2 00:23:52.951 =====Discovery Log Entry 0====== 00:23:52.951 trtype: tcp 00:23:52.951 adrfam: ipv4 00:23:52.951 subtype: current discovery subsystem 00:23:52.951 treq: not specified, sq flow control disable supported 00:23:52.951 portid: 1 00:23:52.951 trsvcid: 4420 00:23:52.951 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:52.951 traddr: 10.0.0.1 00:23:52.951 eflags: none 00:23:52.951 sectype: none 00:23:52.951 =====Discovery Log Entry 1====== 00:23:52.951 trtype: tcp 00:23:52.951 adrfam: ipv4 00:23:52.951 subtype: nvme subsystem 00:23:52.951 treq: not specified, sq flow control disable supported 00:23:52.951 portid: 1 00:23:52.951 trsvcid: 4420 00:23:52.951 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:52.951 traddr: 10.0.0.1 00:23:52.951 eflags: none 00:23:52.951 sectype: none 00:23:52.951 20:55:17 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:52.951 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:52.951 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.213 ===================================================== 00:23:53.213 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:53.213 ===================================================== 00:23:53.213 Controller Capabilities/Features 00:23:53.213 ================================ 00:23:53.213 Vendor ID: 0000 00:23:53.213 Subsystem Vendor ID: 0000 00:23:53.213 Serial Number: cc529e1be67b9da8eb01 00:23:53.213 Model Number: Linux 00:23:53.214 Firmware Version: 6.7.0-68 00:23:53.214 Recommended Arb Burst: 0 00:23:53.214 IEEE OUI Identifier: 00 00 00 00:23:53.214 Multi-path I/O 00:23:53.214 May have multiple subsystem ports: No 00:23:53.214 May have multiple controllers: No 00:23:53.214 Associated with SR-IOV VF: No 00:23:53.214 Max Data Transfer Size: Unlimited 00:23:53.214 Max Number of Namespaces: 0 00:23:53.214 Max Number of I/O Queues: 1024 00:23:53.214 NVMe Specification Version (VS): 1.3 00:23:53.214 NVMe Specification Version (Identify): 1.3 00:23:53.214 Maximum Queue Entries: 1024 00:23:53.214 Contiguous Queues Required: No 00:23:53.214 Arbitration Mechanisms Supported 00:23:53.214 Weighted Round Robin: Not Supported 00:23:53.214 Vendor Specific: Not Supported 00:23:53.214 Reset Timeout: 7500 ms 00:23:53.214 Doorbell Stride: 4 bytes 00:23:53.214 NVM Subsystem Reset: Not Supported 00:23:53.214 Command Sets Supported 00:23:53.214 NVM Command Set: Supported 00:23:53.214 Boot Partition: Not Supported 00:23:53.214 Memory Page Size Minimum: 4096 bytes 00:23:53.214 Memory Page Size Maximum: 4096 bytes 00:23:53.214 Persistent Memory Region: Not Supported 00:23:53.214 Optional Asynchronous Events Supported 00:23:53.214 Namespace Attribute Notices: Not Supported 00:23:53.214 Firmware Activation Notices: Not Supported 00:23:53.214 ANA Change Notices: Not Supported 00:23:53.214 PLE Aggregate Log Change Notices: Not Supported 00:23:53.214 LBA Status Info Alert Notices: Not Supported 00:23:53.214 EGE Aggregate Log Change Notices: Not Supported 00:23:53.214 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.214 Zone Descriptor Change Notices: Not Supported 00:23:53.214 Discovery Log Change Notices: Supported 
00:23:53.214 Controller Attributes 00:23:53.214 128-bit Host Identifier: Not Supported 00:23:53.214 Non-Operational Permissive Mode: Not Supported 00:23:53.214 NVM Sets: Not Supported 00:23:53.214 Read Recovery Levels: Not Supported 00:23:53.214 Endurance Groups: Not Supported 00:23:53.214 Predictable Latency Mode: Not Supported 00:23:53.214 Traffic Based Keep ALive: Not Supported 00:23:53.214 Namespace Granularity: Not Supported 00:23:53.214 SQ Associations: Not Supported 00:23:53.214 UUID List: Not Supported 00:23:53.214 Multi-Domain Subsystem: Not Supported 00:23:53.214 Fixed Capacity Management: Not Supported 00:23:53.214 Variable Capacity Management: Not Supported 00:23:53.214 Delete Endurance Group: Not Supported 00:23:53.214 Delete NVM Set: Not Supported 00:23:53.214 Extended LBA Formats Supported: Not Supported 00:23:53.214 Flexible Data Placement Supported: Not Supported 00:23:53.214 00:23:53.214 Controller Memory Buffer Support 00:23:53.214 ================================ 00:23:53.214 Supported: No 00:23:53.214 00:23:53.214 Persistent Memory Region Support 00:23:53.214 ================================ 00:23:53.214 Supported: No 00:23:53.214 00:23:53.214 Admin Command Set Attributes 00:23:53.214 ============================ 00:23:53.214 Security Send/Receive: Not Supported 00:23:53.214 Format NVM: Not Supported 00:23:53.214 Firmware Activate/Download: Not Supported 00:23:53.214 Namespace Management: Not Supported 00:23:53.214 Device Self-Test: Not Supported 00:23:53.214 Directives: Not Supported 00:23:53.214 NVMe-MI: Not Supported 00:23:53.214 Virtualization Management: Not Supported 00:23:53.214 Doorbell Buffer Config: Not Supported 00:23:53.214 Get LBA Status Capability: Not Supported 00:23:53.214 Command & Feature Lockdown Capability: Not Supported 00:23:53.214 Abort Command Limit: 1 00:23:53.214 Async Event Request Limit: 1 00:23:53.214 Number of Firmware Slots: N/A 00:23:53.214 Firmware Slot 1 Read-Only: N/A 00:23:53.214 Firmware Activation Without Reset: N/A 00:23:53.214 Multiple Update Detection Support: N/A 00:23:53.214 Firmware Update Granularity: No Information Provided 00:23:53.214 Per-Namespace SMART Log: No 00:23:53.214 Asymmetric Namespace Access Log Page: Not Supported 00:23:53.214 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:53.214 Command Effects Log Page: Not Supported 00:23:53.214 Get Log Page Extended Data: Supported 00:23:53.214 Telemetry Log Pages: Not Supported 00:23:53.214 Persistent Event Log Pages: Not Supported 00:23:53.214 Supported Log Pages Log Page: May Support 00:23:53.214 Commands Supported & Effects Log Page: Not Supported 00:23:53.214 Feature Identifiers & Effects Log Page:May Support 00:23:53.214 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.214 Data Area 4 for Telemetry Log: Not Supported 00:23:53.214 Error Log Page Entries Supported: 1 00:23:53.214 Keep Alive: Not Supported 00:23:53.214 00:23:53.214 NVM Command Set Attributes 00:23:53.214 ========================== 00:23:53.214 Submission Queue Entry Size 00:23:53.214 Max: 1 00:23:53.214 Min: 1 00:23:53.214 Completion Queue Entry Size 00:23:53.214 Max: 1 00:23:53.214 Min: 1 00:23:53.214 Number of Namespaces: 0 00:23:53.214 Compare Command: Not Supported 00:23:53.214 Write Uncorrectable Command: Not Supported 00:23:53.214 Dataset Management Command: Not Supported 00:23:53.214 Write Zeroes Command: Not Supported 00:23:53.214 Set Features Save Field: Not Supported 00:23:53.214 Reservations: Not Supported 00:23:53.214 Timestamp: Not Supported 00:23:53.214 Copy: Not 
Supported 00:23:53.214 Volatile Write Cache: Not Present 00:23:53.214 Atomic Write Unit (Normal): 1 00:23:53.214 Atomic Write Unit (PFail): 1 00:23:53.214 Atomic Compare & Write Unit: 1 00:23:53.214 Fused Compare & Write: Not Supported 00:23:53.214 Scatter-Gather List 00:23:53.214 SGL Command Set: Supported 00:23:53.214 SGL Keyed: Not Supported 00:23:53.214 SGL Bit Bucket Descriptor: Not Supported 00:23:53.214 SGL Metadata Pointer: Not Supported 00:23:53.214 Oversized SGL: Not Supported 00:23:53.214 SGL Metadata Address: Not Supported 00:23:53.214 SGL Offset: Supported 00:23:53.214 Transport SGL Data Block: Not Supported 00:23:53.214 Replay Protected Memory Block: Not Supported 00:23:53.214 00:23:53.214 Firmware Slot Information 00:23:53.214 ========================= 00:23:53.214 Active slot: 0 00:23:53.214 00:23:53.214 00:23:53.214 Error Log 00:23:53.214 ========= 00:23:53.214 00:23:53.214 Active Namespaces 00:23:53.214 ================= 00:23:53.214 Discovery Log Page 00:23:53.214 ================== 00:23:53.214 Generation Counter: 2 00:23:53.214 Number of Records: 2 00:23:53.214 Record Format: 0 00:23:53.214 00:23:53.214 Discovery Log Entry 0 00:23:53.214 ---------------------- 00:23:53.214 Transport Type: 3 (TCP) 00:23:53.214 Address Family: 1 (IPv4) 00:23:53.214 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:53.214 Entry Flags: 00:23:53.214 Duplicate Returned Information: 0 00:23:53.214 Explicit Persistent Connection Support for Discovery: 0 00:23:53.214 Transport Requirements: 00:23:53.214 Secure Channel: Not Specified 00:23:53.214 Port ID: 1 (0x0001) 00:23:53.214 Controller ID: 65535 (0xffff) 00:23:53.214 Admin Max SQ Size: 32 00:23:53.214 Transport Service Identifier: 4420 00:23:53.214 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:53.214 Transport Address: 10.0.0.1 00:23:53.214 Discovery Log Entry 1 00:23:53.214 ---------------------- 00:23:53.214 Transport Type: 3 (TCP) 00:23:53.214 Address Family: 1 (IPv4) 00:23:53.214 Subsystem Type: 2 (NVM Subsystem) 00:23:53.214 Entry Flags: 00:23:53.214 Duplicate Returned Information: 0 00:23:53.214 Explicit Persistent Connection Support for Discovery: 0 00:23:53.214 Transport Requirements: 00:23:53.214 Secure Channel: Not Specified 00:23:53.214 Port ID: 1 (0x0001) 00:23:53.214 Controller ID: 65535 (0xffff) 00:23:53.214 Admin Max SQ Size: 32 00:23:53.214 Transport Service Identifier: 4420 00:23:53.214 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:53.214 Transport Address: 10.0.0.1 00:23:53.215 20:55:17 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:53.215 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.215 get_feature(0x01) failed 00:23:53.215 get_feature(0x02) failed 00:23:53.215 get_feature(0x04) failed 00:23:53.215 ===================================================== 00:23:53.215 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:53.215 ===================================================== 00:23:53.215 Controller Capabilities/Features 00:23:53.215 ================================ 00:23:53.215 Vendor ID: 0000 00:23:53.215 Subsystem Vendor ID: 0000 00:23:53.215 Serial Number: a3d9ad147caf519ecf3a 00:23:53.215 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:53.215 Firmware Version: 6.7.0-68 00:23:53.215 Recommended Arb Burst: 6 00:23:53.215 IEEE OUI Identifier: 00 00 00 
00:23:53.215 Multi-path I/O 00:23:53.215 May have multiple subsystem ports: Yes 00:23:53.215 May have multiple controllers: Yes 00:23:53.215 Associated with SR-IOV VF: No 00:23:53.215 Max Data Transfer Size: Unlimited 00:23:53.215 Max Number of Namespaces: 1024 00:23:53.215 Max Number of I/O Queues: 128 00:23:53.215 NVMe Specification Version (VS): 1.3 00:23:53.215 NVMe Specification Version (Identify): 1.3 00:23:53.215 Maximum Queue Entries: 1024 00:23:53.215 Contiguous Queues Required: No 00:23:53.215 Arbitration Mechanisms Supported 00:23:53.215 Weighted Round Robin: Not Supported 00:23:53.215 Vendor Specific: Not Supported 00:23:53.215 Reset Timeout: 7500 ms 00:23:53.215 Doorbell Stride: 4 bytes 00:23:53.215 NVM Subsystem Reset: Not Supported 00:23:53.215 Command Sets Supported 00:23:53.215 NVM Command Set: Supported 00:23:53.215 Boot Partition: Not Supported 00:23:53.215 Memory Page Size Minimum: 4096 bytes 00:23:53.215 Memory Page Size Maximum: 4096 bytes 00:23:53.215 Persistent Memory Region: Not Supported 00:23:53.215 Optional Asynchronous Events Supported 00:23:53.215 Namespace Attribute Notices: Supported 00:23:53.215 Firmware Activation Notices: Not Supported 00:23:53.215 ANA Change Notices: Supported 00:23:53.215 PLE Aggregate Log Change Notices: Not Supported 00:23:53.215 LBA Status Info Alert Notices: Not Supported 00:23:53.215 EGE Aggregate Log Change Notices: Not Supported 00:23:53.215 Normal NVM Subsystem Shutdown event: Not Supported 00:23:53.215 Zone Descriptor Change Notices: Not Supported 00:23:53.215 Discovery Log Change Notices: Not Supported 00:23:53.215 Controller Attributes 00:23:53.215 128-bit Host Identifier: Supported 00:23:53.215 Non-Operational Permissive Mode: Not Supported 00:23:53.215 NVM Sets: Not Supported 00:23:53.215 Read Recovery Levels: Not Supported 00:23:53.215 Endurance Groups: Not Supported 00:23:53.215 Predictable Latency Mode: Not Supported 00:23:53.215 Traffic Based Keep ALive: Supported 00:23:53.215 Namespace Granularity: Not Supported 00:23:53.215 SQ Associations: Not Supported 00:23:53.215 UUID List: Not Supported 00:23:53.215 Multi-Domain Subsystem: Not Supported 00:23:53.215 Fixed Capacity Management: Not Supported 00:23:53.215 Variable Capacity Management: Not Supported 00:23:53.215 Delete Endurance Group: Not Supported 00:23:53.215 Delete NVM Set: Not Supported 00:23:53.215 Extended LBA Formats Supported: Not Supported 00:23:53.215 Flexible Data Placement Supported: Not Supported 00:23:53.215 00:23:53.215 Controller Memory Buffer Support 00:23:53.215 ================================ 00:23:53.215 Supported: No 00:23:53.215 00:23:53.215 Persistent Memory Region Support 00:23:53.215 ================================ 00:23:53.215 Supported: No 00:23:53.215 00:23:53.215 Admin Command Set Attributes 00:23:53.215 ============================ 00:23:53.215 Security Send/Receive: Not Supported 00:23:53.215 Format NVM: Not Supported 00:23:53.215 Firmware Activate/Download: Not Supported 00:23:53.215 Namespace Management: Not Supported 00:23:53.215 Device Self-Test: Not Supported 00:23:53.215 Directives: Not Supported 00:23:53.215 NVMe-MI: Not Supported 00:23:53.215 Virtualization Management: Not Supported 00:23:53.215 Doorbell Buffer Config: Not Supported 00:23:53.215 Get LBA Status Capability: Not Supported 00:23:53.215 Command & Feature Lockdown Capability: Not Supported 00:23:53.215 Abort Command Limit: 4 00:23:53.215 Async Event Request Limit: 4 00:23:53.215 Number of Firmware Slots: N/A 00:23:53.215 Firmware Slot 1 Read-Only: N/A 00:23:53.215 
Firmware Activation Without Reset: N/A 00:23:53.215 Multiple Update Detection Support: N/A 00:23:53.215 Firmware Update Granularity: No Information Provided 00:23:53.215 Per-Namespace SMART Log: Yes 00:23:53.215 Asymmetric Namespace Access Log Page: Supported 00:23:53.215 ANA Transition Time : 10 sec 00:23:53.215 00:23:53.215 Asymmetric Namespace Access Capabilities 00:23:53.215 ANA Optimized State : Supported 00:23:53.215 ANA Non-Optimized State : Supported 00:23:53.215 ANA Inaccessible State : Supported 00:23:53.215 ANA Persistent Loss State : Supported 00:23:53.215 ANA Change State : Supported 00:23:53.215 ANAGRPID is not changed : No 00:23:53.215 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:53.215 00:23:53.215 ANA Group Identifier Maximum : 128 00:23:53.215 Number of ANA Group Identifiers : 128 00:23:53.215 Max Number of Allowed Namespaces : 1024 00:23:53.215 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:53.215 Command Effects Log Page: Supported 00:23:53.215 Get Log Page Extended Data: Supported 00:23:53.215 Telemetry Log Pages: Not Supported 00:23:53.215 Persistent Event Log Pages: Not Supported 00:23:53.215 Supported Log Pages Log Page: May Support 00:23:53.215 Commands Supported & Effects Log Page: Not Supported 00:23:53.215 Feature Identifiers & Effects Log Page:May Support 00:23:53.215 NVMe-MI Commands & Effects Log Page: May Support 00:23:53.215 Data Area 4 for Telemetry Log: Not Supported 00:23:53.215 Error Log Page Entries Supported: 128 00:23:53.215 Keep Alive: Supported 00:23:53.215 Keep Alive Granularity: 1000 ms 00:23:53.215 00:23:53.215 NVM Command Set Attributes 00:23:53.215 ========================== 00:23:53.215 Submission Queue Entry Size 00:23:53.215 Max: 64 00:23:53.215 Min: 64 00:23:53.215 Completion Queue Entry Size 00:23:53.215 Max: 16 00:23:53.215 Min: 16 00:23:53.215 Number of Namespaces: 1024 00:23:53.215 Compare Command: Not Supported 00:23:53.215 Write Uncorrectable Command: Not Supported 00:23:53.215 Dataset Management Command: Supported 00:23:53.215 Write Zeroes Command: Supported 00:23:53.215 Set Features Save Field: Not Supported 00:23:53.215 Reservations: Not Supported 00:23:53.215 Timestamp: Not Supported 00:23:53.215 Copy: Not Supported 00:23:53.215 Volatile Write Cache: Present 00:23:53.215 Atomic Write Unit (Normal): 1 00:23:53.215 Atomic Write Unit (PFail): 1 00:23:53.215 Atomic Compare & Write Unit: 1 00:23:53.215 Fused Compare & Write: Not Supported 00:23:53.215 Scatter-Gather List 00:23:53.215 SGL Command Set: Supported 00:23:53.215 SGL Keyed: Not Supported 00:23:53.215 SGL Bit Bucket Descriptor: Not Supported 00:23:53.215 SGL Metadata Pointer: Not Supported 00:23:53.215 Oversized SGL: Not Supported 00:23:53.215 SGL Metadata Address: Not Supported 00:23:53.215 SGL Offset: Supported 00:23:53.215 Transport SGL Data Block: Not Supported 00:23:53.215 Replay Protected Memory Block: Not Supported 00:23:53.215 00:23:53.215 Firmware Slot Information 00:23:53.215 ========================= 00:23:53.215 Active slot: 0 00:23:53.215 00:23:53.215 Asymmetric Namespace Access 00:23:53.215 =========================== 00:23:53.215 Change Count : 0 00:23:53.215 Number of ANA Group Descriptors : 1 00:23:53.215 ANA Group Descriptor : 0 00:23:53.215 ANA Group ID : 1 00:23:53.215 Number of NSID Values : 1 00:23:53.215 Change Count : 0 00:23:53.215 ANA State : 1 00:23:53.215 Namespace Identifier : 1 00:23:53.215 00:23:53.215 Commands Supported and Effects 00:23:53.215 ============================== 00:23:53.215 Admin Commands 00:23:53.215 -------------- 
00:23:53.215 Get Log Page (02h): Supported 00:23:53.215 Identify (06h): Supported 00:23:53.215 Abort (08h): Supported 00:23:53.215 Set Features (09h): Supported 00:23:53.215 Get Features (0Ah): Supported 00:23:53.215 Asynchronous Event Request (0Ch): Supported 00:23:53.215 Keep Alive (18h): Supported 00:23:53.215 I/O Commands 00:23:53.215 ------------ 00:23:53.215 Flush (00h): Supported 00:23:53.215 Write (01h): Supported LBA-Change 00:23:53.215 Read (02h): Supported 00:23:53.215 Write Zeroes (08h): Supported LBA-Change 00:23:53.215 Dataset Management (09h): Supported 00:23:53.215 00:23:53.215 Error Log 00:23:53.215 ========= 00:23:53.215 Entry: 0 00:23:53.215 Error Count: 0x3 00:23:53.215 Submission Queue Id: 0x0 00:23:53.215 Command Id: 0x5 00:23:53.216 Phase Bit: 0 00:23:53.216 Status Code: 0x2 00:23:53.216 Status Code Type: 0x0 00:23:53.216 Do Not Retry: 1 00:23:53.216 Error Location: 0x28 00:23:53.216 LBA: 0x0 00:23:53.216 Namespace: 0x0 00:23:53.216 Vendor Log Page: 0x0 00:23:53.216 ----------- 00:23:53.216 Entry: 1 00:23:53.216 Error Count: 0x2 00:23:53.216 Submission Queue Id: 0x0 00:23:53.216 Command Id: 0x5 00:23:53.216 Phase Bit: 0 00:23:53.216 Status Code: 0x2 00:23:53.216 Status Code Type: 0x0 00:23:53.216 Do Not Retry: 1 00:23:53.216 Error Location: 0x28 00:23:53.216 LBA: 0x0 00:23:53.216 Namespace: 0x0 00:23:53.216 Vendor Log Page: 0x0 00:23:53.216 ----------- 00:23:53.216 Entry: 2 00:23:53.216 Error Count: 0x1 00:23:53.216 Submission Queue Id: 0x0 00:23:53.216 Command Id: 0x4 00:23:53.216 Phase Bit: 0 00:23:53.216 Status Code: 0x2 00:23:53.216 Status Code Type: 0x0 00:23:53.216 Do Not Retry: 1 00:23:53.216 Error Location: 0x28 00:23:53.216 LBA: 0x0 00:23:53.216 Namespace: 0x0 00:23:53.216 Vendor Log Page: 0x0 00:23:53.216 00:23:53.216 Number of Queues 00:23:53.216 ================ 00:23:53.216 Number of I/O Submission Queues: 128 00:23:53.216 Number of I/O Completion Queues: 128 00:23:53.216 00:23:53.216 ZNS Specific Controller Data 00:23:53.216 ============================ 00:23:53.216 Zone Append Size Limit: 0 00:23:53.216 00:23:53.216 00:23:53.216 Active Namespaces 00:23:53.216 ================= 00:23:53.216 get_feature(0x05) failed 00:23:53.216 Namespace ID:1 00:23:53.216 Command Set Identifier: NVM (00h) 00:23:53.216 Deallocate: Supported 00:23:53.216 Deallocated/Unwritten Error: Not Supported 00:23:53.216 Deallocated Read Value: Unknown 00:23:53.216 Deallocate in Write Zeroes: Not Supported 00:23:53.216 Deallocated Guard Field: 0xFFFF 00:23:53.216 Flush: Supported 00:23:53.216 Reservation: Not Supported 00:23:53.216 Namespace Sharing Capabilities: Multiple Controllers 00:23:53.216 Size (in LBAs): 3750748848 (1788GiB) 00:23:53.216 Capacity (in LBAs): 3750748848 (1788GiB) 00:23:53.216 Utilization (in LBAs): 3750748848 (1788GiB) 00:23:53.216 UUID: 35fc5e4a-9050-47fe-b45a-22d4d6100f72 00:23:53.216 Thin Provisioning: Not Supported 00:23:53.216 Per-NS Atomic Units: Yes 00:23:53.216 Atomic Write Unit (Normal): 8 00:23:53.216 Atomic Write Unit (PFail): 8 00:23:53.216 Preferred Write Granularity: 8 00:23:53.216 Atomic Compare & Write Unit: 8 00:23:53.216 Atomic Boundary Size (Normal): 0 00:23:53.216 Atomic Boundary Size (PFail): 0 00:23:53.216 Atomic Boundary Offset: 0 00:23:53.216 NGUID/EUI64 Never Reused: No 00:23:53.216 ANA group ID: 1 00:23:53.216 Namespace Write Protected: No 00:23:53.216 Number of LBA Formats: 1 00:23:53.216 Current LBA Format: LBA Format #00 00:23:53.216 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:53.216 00:23:53.216 20:55:17 -- 
host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:53.216 20:55:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:53.216 20:55:17 -- nvmf/common.sh@117 -- # sync 00:23:53.216 20:55:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.216 20:55:17 -- nvmf/common.sh@120 -- # set +e 00:23:53.216 20:55:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.216 20:55:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.216 rmmod nvme_tcp 00:23:53.216 rmmod nvme_fabrics 00:23:53.216 20:55:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.216 20:55:17 -- nvmf/common.sh@124 -- # set -e 00:23:53.216 20:55:17 -- nvmf/common.sh@125 -- # return 0 00:23:53.216 20:55:17 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:53.216 20:55:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:53.216 20:55:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:53.216 20:55:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:53.216 20:55:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.216 20:55:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.216 20:55:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.216 20:55:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.216 20:55:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.758 20:55:19 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.758 20:55:19 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:55.758 20:55:19 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:55.758 20:55:19 -- nvmf/common.sh@675 -- # echo 0 00:23:55.758 20:55:19 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.758 20:55:19 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.758 20:55:19 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:55.758 20:55:19 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.758 20:55:19 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:55.758 20:55:19 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:55.758 20:55:19 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:59.057 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:23:59.057 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:23:59.316 00:23:59.316 real 0m18.828s 00:23:59.316 user 0m5.104s 00:23:59.316 sys 0m10.621s 00:23:59.316 
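The clean_kernel_target trace above tears the kernel NVMe-oF target down in the reverse order it was built: disable the namespace, unlink the subsystem from the port, remove the namespace, port and subsystem directories, then unload the modules. A minimal sketch of that sequence under the default testnqn layout this test uses; the destination of the "echo 0" is inferred to be the namespace enable attribute, since xtrace does not print redirections (run as root):

# Disable the namespace before tearing the configfs tree down (inferred redirection target).
echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
# Unlink the subsystem from the port, then remove namespace, port and subsystem directories.
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
# Finally unload the TCP transport and the nvmet core.
modprobe -r nvmet_tcp nvmet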
20:55:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:59.316 20:55:23 -- common/autotest_common.sh@10 -- # set +x 00:23:59.316 ************************************ 00:23:59.316 END TEST nvmf_identify_kernel_target 00:23:59.316 ************************************ 00:23:59.316 20:55:23 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:59.316 20:55:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:59.316 20:55:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:59.316 20:55:23 -- common/autotest_common.sh@10 -- # set +x 00:23:59.577 ************************************ 00:23:59.577 START TEST nvmf_auth 00:23:59.577 ************************************ 00:23:59.577 20:55:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:59.577 * Looking for test storage... 00:23:59.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.577 20:55:24 -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.577 20:55:24 -- nvmf/common.sh@7 -- # uname -s 00:23:59.577 20:55:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.577 20:55:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.577 20:55:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.577 20:55:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.577 20:55:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.577 20:55:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.577 20:55:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.577 20:55:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.577 20:55:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.577 20:55:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.577 20:55:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:59.577 20:55:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:23:59.577 20:55:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.577 20:55:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.577 20:55:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.577 20:55:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.577 20:55:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.577 20:55:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.577 20:55:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.577 20:55:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.577 20:55:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.577 20:55:24 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.577 20:55:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.577 20:55:24 -- paths/export.sh@5 -- # export PATH 00:23:59.578 20:55:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.578 20:55:24 -- nvmf/common.sh@47 -- # : 0 00:23:59.578 20:55:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:59.578 20:55:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:59.578 20:55:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.578 20:55:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.578 20:55:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.578 20:55:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:59.578 20:55:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:59.578 20:55:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:59.578 20:55:24 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:59.578 20:55:24 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:59.578 20:55:24 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:59.578 20:55:24 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:59.578 20:55:24 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:59.578 20:55:24 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:59.578 20:55:24 -- host/auth.sh@21 -- # keys=() 00:23:59.578 20:55:24 -- host/auth.sh@77 -- # nvmftestinit 00:23:59.578 20:55:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:59.578 20:55:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.578 20:55:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:59.578 20:55:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:59.578 20:55:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:59.578 20:55:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.578 20:55:24 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.578 20:55:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.578 20:55:24 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:23:59.578 20:55:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:59.578 20:55:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.578 20:55:24 -- common/autotest_common.sh@10 -- # set +x 00:24:07.718 20:55:30 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:07.718 20:55:30 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.718 20:55:30 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.718 20:55:30 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.718 20:55:30 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.718 20:55:30 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.718 20:55:30 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.718 20:55:30 -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.718 20:55:30 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.718 20:55:30 -- nvmf/common.sh@296 -- # e810=() 00:24:07.718 20:55:30 -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.718 20:55:30 -- nvmf/common.sh@297 -- # x722=() 00:24:07.718 20:55:30 -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.718 20:55:30 -- nvmf/common.sh@298 -- # mlx=() 00:24:07.718 20:55:30 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.718 20:55:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.718 20:55:30 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.718 20:55:30 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.718 20:55:30 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.718 20:55:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.718 20:55:30 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:07.718 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:07.718 20:55:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.718 20:55:30 -- nvmf/common.sh@341 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:24:07.718 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:07.718 20:55:30 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.718 20:55:30 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.718 20:55:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.718 20:55:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:07.718 20:55:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.718 20:55:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:07.718 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:07.718 20:55:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.718 20:55:30 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.718 20:55:30 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.718 20:55:30 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:07.718 20:55:30 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.718 20:55:30 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:07.718 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:07.718 20:55:30 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.718 20:55:30 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:07.718 20:55:30 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:07.718 20:55:30 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:07.718 20:55:30 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:07.718 20:55:30 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.718 20:55:30 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.718 20:55:30 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.718 20:55:30 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.718 20:55:30 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.718 20:55:30 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.718 20:55:30 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.718 20:55:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.718 20:55:30 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.718 20:55:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.718 20:55:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.718 20:55:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.718 20:55:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.718 20:55:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.718 20:55:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.718 20:55:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.718 20:55:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.718 20:55:31 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.718 20:55:31 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.718 20:55:31 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:24:07.718 00:24:07.718 --- 10.0.0.2 ping statistics --- 00:24:07.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.718 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:24:07.718 20:55:31 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:24:07.718 00:24:07.718 --- 10.0.0.1 ping statistics --- 00:24:07.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.718 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:24:07.718 20:55:31 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.718 20:55:31 -- nvmf/common.sh@411 -- # return 0 00:24:07.718 20:55:31 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:07.718 20:55:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.718 20:55:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:07.718 20:55:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:07.718 20:55:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.718 20:55:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:07.718 20:55:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:07.718 20:55:31 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:24:07.718 20:55:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:07.718 20:55:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:07.718 20:55:31 -- common/autotest_common.sh@10 -- # set +x 00:24:07.718 20:55:31 -- nvmf/common.sh@470 -- # nvmfpid=2900865 00:24:07.718 20:55:31 -- nvmf/common.sh@471 -- # waitforlisten 2900865 00:24:07.718 20:55:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:07.718 20:55:31 -- common/autotest_common.sh@817 -- # '[' -z 2900865 ']' 00:24:07.718 20:55:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.718 20:55:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:07.718 20:55:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
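Before the auth test proper starts, nvmf_tcp_init splits the two ice-driven ports across network namespaces so one end can dial the other over real NICs on a single box: cvl_0_0 (NVMF_TARGET_INTERFACE) moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 (NVMF_INITIATOR_INTERFACE) stays in the root namespace as 10.0.0.1, and the nvmf_tgt application is then launched inside the namespace. Condensed from the trace above, with the interface names and addresses this rig uses:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # NVMF_TARGET_INTERFACE goes into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # NVMF_INITIATOR_INTERFACE stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                      # root namespace -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> root namespace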
00:24:07.718 20:55:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:07.718 20:55:31 -- common/autotest_common.sh@10 -- # set +x 00:24:07.718 20:55:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:07.718 20:55:32 -- common/autotest_common.sh@850 -- # return 0 00:24:07.718 20:55:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:07.718 20:55:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:07.718 20:55:32 -- common/autotest_common.sh@10 -- # set +x 00:24:07.718 20:55:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.718 20:55:32 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:07.718 20:55:32 -- host/auth.sh@81 -- # gen_key null 32 00:24:07.718 20:55:32 -- host/auth.sh@53 -- # local digest len file key 00:24:07.718 20:55:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.718 20:55:32 -- host/auth.sh@54 -- # local -A digests 00:24:07.718 20:55:32 -- host/auth.sh@56 -- # digest=null 00:24:07.718 20:55:32 -- host/auth.sh@56 -- # len=32 00:24:07.718 20:55:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.718 20:55:32 -- host/auth.sh@57 -- # key=205d650ea6b583210d682721b73867ab 00:24:07.718 20:55:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:07.718 20:55:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.5kg 00:24:07.718 20:55:32 -- host/auth.sh@59 -- # format_dhchap_key 205d650ea6b583210d682721b73867ab 0 00:24:07.718 20:55:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 205d650ea6b583210d682721b73867ab 0 00:24:07.718 20:55:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.718 20:55:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.718 20:55:32 -- nvmf/common.sh@693 -- # key=205d650ea6b583210d682721b73867ab 00:24:07.718 20:55:32 -- nvmf/common.sh@693 -- # digest=0 00:24:07.718 20:55:32 -- nvmf/common.sh@694 -- # python - 00:24:07.980 20:55:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.5kg 00:24:07.980 20:55:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.5kg 00:24:07.980 20:55:32 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.5kg 00:24:07.980 20:55:32 -- host/auth.sh@82 -- # gen_key null 48 00:24:07.980 20:55:32 -- host/auth.sh@53 -- # local digest len file key 00:24:07.980 20:55:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.980 20:55:32 -- host/auth.sh@54 -- # local -A digests 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # digest=null 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # len=48 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # key=a25f0b50241fd211c9b15153d589a66cf1853b2dc4ef3882 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.R2n 00:24:07.980 20:55:32 -- host/auth.sh@59 -- # format_dhchap_key a25f0b50241fd211c9b15153d589a66cf1853b2dc4ef3882 0 00:24:07.980 20:55:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 a25f0b50241fd211c9b15153d589a66cf1853b2dc4ef3882 0 00:24:07.980 20:55:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # key=a25f0b50241fd211c9b15153d589a66cf1853b2dc4ef3882 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # 
digest=0 00:24:07.980 20:55:32 -- nvmf/common.sh@694 -- # python - 00:24:07.980 20:55:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.R2n 00:24:07.980 20:55:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.R2n 00:24:07.980 20:55:32 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.R2n 00:24:07.980 20:55:32 -- host/auth.sh@83 -- # gen_key sha256 32 00:24:07.980 20:55:32 -- host/auth.sh@53 -- # local digest len file key 00:24:07.980 20:55:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.980 20:55:32 -- host/auth.sh@54 -- # local -A digests 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # digest=sha256 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # len=32 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # key=4410c2a9776c6456b84d9eeddc4a8053 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.CeY 00:24:07.980 20:55:32 -- host/auth.sh@59 -- # format_dhchap_key 4410c2a9776c6456b84d9eeddc4a8053 1 00:24:07.980 20:55:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 4410c2a9776c6456b84d9eeddc4a8053 1 00:24:07.980 20:55:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # key=4410c2a9776c6456b84d9eeddc4a8053 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # digest=1 00:24:07.980 20:55:32 -- nvmf/common.sh@694 -- # python - 00:24:07.980 20:55:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.CeY 00:24:07.980 20:55:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.CeY 00:24:07.980 20:55:32 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.CeY 00:24:07.980 20:55:32 -- host/auth.sh@84 -- # gen_key sha384 48 00:24:07.980 20:55:32 -- host/auth.sh@53 -- # local digest len file key 00:24:07.980 20:55:32 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.980 20:55:32 -- host/auth.sh@54 -- # local -A digests 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # digest=sha384 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # len=48 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # key=41034ec93c818d3eb8586c5e5111c274865e03648cffb9d4 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.RFS 00:24:07.980 20:55:32 -- host/auth.sh@59 -- # format_dhchap_key 41034ec93c818d3eb8586c5e5111c274865e03648cffb9d4 2 00:24:07.980 20:55:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 41034ec93c818d3eb8586c5e5111c274865e03648cffb9d4 2 00:24:07.980 20:55:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # key=41034ec93c818d3eb8586c5e5111c274865e03648cffb9d4 00:24:07.980 20:55:32 -- nvmf/common.sh@693 -- # digest=2 00:24:07.980 20:55:32 -- nvmf/common.sh@694 -- # python - 00:24:07.980 20:55:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.RFS 00:24:07.980 20:55:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.RFS 00:24:07.980 20:55:32 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.RFS 00:24:07.980 20:55:32 -- host/auth.sh@85 -- # gen_key sha512 64 00:24:07.980 20:55:32 -- host/auth.sh@53 -- # local digest len file key 00:24:07.980 20:55:32 -- 
host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:07.980 20:55:32 -- host/auth.sh@54 -- # local -A digests 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # digest=sha512 00:24:07.980 20:55:32 -- host/auth.sh@56 -- # len=64 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:07.980 20:55:32 -- host/auth.sh@57 -- # key=c9c3776a71c9016fb2ff719a718f23f3b0eb1a2e9bb41250d6221ac8ee4e0edb 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:24:07.980 20:55:32 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.zzU 00:24:07.980 20:55:32 -- host/auth.sh@59 -- # format_dhchap_key c9c3776a71c9016fb2ff719a718f23f3b0eb1a2e9bb41250d6221ac8ee4e0edb 3 00:24:07.980 20:55:32 -- nvmf/common.sh@708 -- # format_key DHHC-1 c9c3776a71c9016fb2ff719a718f23f3b0eb1a2e9bb41250d6221ac8ee4e0edb 3 00:24:08.241 20:55:32 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:08.241 20:55:32 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:24:08.241 20:55:32 -- nvmf/common.sh@693 -- # key=c9c3776a71c9016fb2ff719a718f23f3b0eb1a2e9bb41250d6221ac8ee4e0edb 00:24:08.241 20:55:32 -- nvmf/common.sh@693 -- # digest=3 00:24:08.241 20:55:32 -- nvmf/common.sh@694 -- # python - 00:24:08.241 20:55:32 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.zzU 00:24:08.241 20:55:32 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.zzU 00:24:08.241 20:55:32 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.zzU 00:24:08.241 20:55:32 -- host/auth.sh@87 -- # waitforlisten 2900865 00:24:08.241 20:55:32 -- common/autotest_common.sh@817 -- # '[' -z 2900865 ']' 00:24:08.241 20:55:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:08.241 20:55:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:08.241 20:55:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:08.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
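gen_key above draws the raw secret from /dev/urandom with xxd and hands the hex string to format_dhchap_key, which emits the DHHC-1 secret representation that both nvmet and SPDK consume. The snippet below is only a rough stand-in for that step, assuming the usual DHHC-1 layout (base64 of the secret bytes with a CRC-32 of the secret appended little-endian), not a quotation of SPDK's helper:

key=$(xxd -p -c0 -l 16 /dev/urandom)        # 32 hex characters, used verbatim as the secret
keyfile=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" 0 > "$keyfile" <<'PYEOF'
# Illustrative wrapper: emit DHHC-1:<hash id>:<base64(secret + crc32 of secret)>:
import base64, struct, sys, zlib
secret, hash_id = sys.argv[1].encode(), int(sys.argv[2])
blob = base64.b64encode(secret + struct.pack("<I", zlib.crc32(secret))).decode()
print(f"DHHC-1:{hash_id:02d}:{blob}:")
PYEOF
chmod 0600 "$keyfile"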
00:24:08.241 20:55:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:08.241 20:55:32 -- common/autotest_common.sh@10 -- # set +x 00:24:08.502 20:55:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:08.502 20:55:32 -- common/autotest_common.sh@850 -- # return 0 00:24:08.502 20:55:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:08.502 20:55:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5kg 00:24:08.502 20:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.502 20:55:32 -- common/autotest_common.sh@10 -- # set +x 00:24:08.502 20:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.502 20:55:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:08.502 20:55:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.R2n 00:24:08.502 20:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.502 20:55:32 -- common/autotest_common.sh@10 -- # set +x 00:24:08.502 20:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.502 20:55:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:08.502 20:55:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.CeY 00:24:08.502 20:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.502 20:55:32 -- common/autotest_common.sh@10 -- # set +x 00:24:08.502 20:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.502 20:55:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:08.502 20:55:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RFS 00:24:08.502 20:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.502 20:55:32 -- common/autotest_common.sh@10 -- # set +x 00:24:08.502 20:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.502 20:55:32 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:24:08.502 20:55:32 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.zzU 00:24:08.502 20:55:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.502 20:55:32 -- common/autotest_common.sh@10 -- # set +x 00:24:08.502 20:55:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.503 20:55:32 -- host/auth.sh@92 -- # nvmet_auth_init 00:24:08.503 20:55:32 -- host/auth.sh@35 -- # get_main_ns_ip 00:24:08.503 20:55:32 -- nvmf/common.sh@717 -- # local ip 00:24:08.503 20:55:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.503 20:55:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.503 20:55:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.503 20:55:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.503 20:55:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.503 20:55:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.503 20:55:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.503 20:55:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.503 20:55:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.503 20:55:32 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:08.503 20:55:32 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:08.503 20:55:32 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:24:08.503 20:55:32 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.503 20:55:32 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:08.503 20:55:32 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:08.503 20:55:32 -- nvmf/common.sh@628 -- # local block nvme 00:24:08.503 20:55:32 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:24:08.503 20:55:32 -- nvmf/common.sh@631 -- # modprobe nvmet 00:24:08.503 20:55:32 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:08.503 20:55:32 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:11.805 Waiting for block devices as requested 00:24:11.805 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:11.805 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:11.805 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:11.805 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:11.805 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:12.065 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:12.065 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:12.065 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:12.327 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:24:12.327 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:24:12.327 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:24:12.587 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:24:12.587 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:24:12.587 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:24:12.849 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:24:12.849 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:24:12.849 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:24:13.792 20:55:38 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:24:13.792 20:55:38 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:13.792 20:55:38 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:24:13.792 20:55:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:13.792 20:55:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:13.792 20:55:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:13.792 20:55:38 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:24:13.792 20:55:38 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:24:13.792 20:55:38 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:13.792 No valid GPT data, bailing 00:24:13.792 20:55:38 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:13.792 20:55:38 -- scripts/common.sh@391 -- # pt= 00:24:13.792 20:55:38 -- scripts/common.sh@392 -- # return 1 00:24:13.792 20:55:38 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:24:13.792 20:55:38 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:24:13.792 20:55:38 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:13.792 20:55:38 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:13.792 20:55:38 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:13.792 20:55:38 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:13.792 20:55:38 -- nvmf/common.sh@656 -- # echo 1 00:24:13.792 20:55:38 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:24:13.792 20:55:38 -- nvmf/common.sh@658 -- # echo 1 00:24:13.792 20:55:38 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:24:13.792 20:55:38 -- nvmf/common.sh@661 -- # echo tcp 00:24:13.792 20:55:38 -- 
nvmf/common.sh@662 -- # echo 4420 00:24:13.792 20:55:38 -- nvmf/common.sh@663 -- # echo ipv4 00:24:13.792 20:55:38 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:13.792 20:55:38 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:24:14.053 00:24:14.053 Discovery Log Number of Records 2, Generation counter 2 00:24:14.053 =====Discovery Log Entry 0====== 00:24:14.053 trtype: tcp 00:24:14.053 adrfam: ipv4 00:24:14.053 subtype: current discovery subsystem 00:24:14.053 treq: not specified, sq flow control disable supported 00:24:14.053 portid: 1 00:24:14.053 trsvcid: 4420 00:24:14.053 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:14.053 traddr: 10.0.0.1 00:24:14.053 eflags: none 00:24:14.053 sectype: none 00:24:14.053 =====Discovery Log Entry 1====== 00:24:14.053 trtype: tcp 00:24:14.053 adrfam: ipv4 00:24:14.053 subtype: nvme subsystem 00:24:14.053 treq: not specified, sq flow control disable supported 00:24:14.053 portid: 1 00:24:14.053 trsvcid: 4420 00:24:14.053 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:14.053 traddr: 10.0.0.1 00:24:14.053 eflags: none 00:24:14.053 sectype: none 00:24:14.053 20:55:38 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:14.053 20:55:38 -- host/auth.sh@37 -- # echo 0 00:24:14.053 20:55:38 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:14.053 20:55:38 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:14.053 20:55:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.053 20:55:38 -- host/auth.sh@44 -- # digest=sha256 00:24:14.053 20:55:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.053 20:55:38 -- host/auth.sh@44 -- # keyid=1 00:24:14.053 20:55:38 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:14.053 20:55:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.053 20:55:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:14.053 20:55:38 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:14.053 20:55:38 -- host/auth.sh@100 -- # IFS=, 00:24:14.053 20:55:38 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:24:14.053 20:55:38 -- host/auth.sh@100 -- # IFS=, 00:24:14.053 20:55:38 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:14.053 20:55:38 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:14.053 20:55:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.053 20:55:38 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:24:14.053 20:55:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:14.053 20:55:38 -- host/auth.sh@68 -- # keyid=1 00:24:14.053 20:55:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:14.053 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.053 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.053 20:55:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.053 20:55:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.053 20:55:38 -- nvmf/common.sh@717 -- # local ip 00:24:14.053 20:55:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.053 20:55:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.053 20:55:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.053 20:55:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.053 20:55:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.053 20:55:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.053 20:55:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.053 20:55:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.053 20:55:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.053 20:55:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:14.053 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.053 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.053 nvme0n1 00:24:14.053 20:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.053 20:55:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.053 20:55:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.053 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.053 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.053 20:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.053 20:55:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.053 20:55:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.053 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.053 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.053 20:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.053 20:55:38 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:14.053 20:55:38 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.053 20:55:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.053 20:55:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:14.053 20:55:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.053 20:55:38 -- host/auth.sh@44 -- # digest=sha256 00:24:14.053 20:55:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.053 20:55:38 -- host/auth.sh@44 -- # keyid=0 00:24:14.053 20:55:38 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:14.053 20:55:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.053 20:55:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:14.053 20:55:38 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:14.053 20:55:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:24:14.053 20:55:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.053 20:55:38 -- host/auth.sh@68 -- # digest=sha256 00:24:14.053 20:55:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:14.053 20:55:38 -- host/auth.sh@68 -- # keyid=0 00:24:14.053 20:55:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.053 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.053 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.314 20:55:38 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.314 20:55:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.314 20:55:38 -- nvmf/common.sh@717 -- # local ip 00:24:14.314 20:55:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.314 20:55:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.314 20:55:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.314 20:55:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.314 20:55:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.314 20:55:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.314 20:55:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.314 20:55:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.314 20:55:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.314 20:55:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:14.314 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.314 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.314 nvme0n1 00:24:14.314 20:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.314 20:55:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.314 20:55:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.314 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.314 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.314 20:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.314 20:55:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.314 20:55:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.314 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.314 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.314 20:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.314 20:55:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.314 20:55:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:14.314 20:55:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.314 20:55:38 -- host/auth.sh@44 -- # digest=sha256 00:24:14.314 20:55:38 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.314 20:55:38 -- host/auth.sh@44 -- # keyid=1 00:24:14.314 20:55:38 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:14.314 20:55:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.314 20:55:38 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:14.314 20:55:38 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:14.314 20:55:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:24:14.314 20:55:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.314 20:55:38 -- host/auth.sh@68 -- # digest=sha256 00:24:14.314 20:55:38 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:14.314 20:55:38 -- host/auth.sh@68 -- # keyid=1 00:24:14.314 20:55:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.314 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.314 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.314 20:55:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.314 20:55:38 -- host/auth.sh@70 -- # get_main_ns_ip 
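Each pass of the key loop above first programs the kernel target's host entry via nvmet_auth_set_key (hash, DH group, DHHC-1 secret), then connect_authenticate restricts SPDK's allowed digests and DH groups and attaches with the matching key that was registered earlier with keyring_file_add_key. A compressed sketch of the first sha256/ffdhe2048 iteration, assuming the usual nvmet host configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key), which the xtrace output hides because they are redirection targets:

hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
# Kernel target side: hash, DH group and secret for this host (attribute names assumed).
echo 'hmac(sha256)' > "$host_dir"/dhchap_hash
echo ffdhe2048 > "$host_dir"/dhchap_dhgroup
echo 'DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge:' > "$host_dir"/dhchap_key
# SPDK host side: key file registered once up front, then digest/group limits and the attach.
rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5kg
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q "$hostnqn" -n "$subnqn" --dhchap-key key0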
00:24:14.314 20:55:38 -- nvmf/common.sh@717 -- # local ip 00:24:14.314 20:55:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.314 20:55:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.314 20:55:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.314 20:55:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.314 20:55:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.314 20:55:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.314 20:55:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.314 20:55:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.314 20:55:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.314 20:55:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:14.314 20:55:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.314 20:55:38 -- common/autotest_common.sh@10 -- # set +x 00:24:14.575 nvme0n1 00:24:14.575 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.575 20:55:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.575 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.575 20:55:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.575 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.575 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.575 20:55:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.575 20:55:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.575 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.575 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.575 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.575 20:55:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.575 20:55:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:14.575 20:55:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.575 20:55:39 -- host/auth.sh@44 -- # digest=sha256 00:24:14.575 20:55:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.575 20:55:39 -- host/auth.sh@44 -- # keyid=2 00:24:14.575 20:55:39 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:14.575 20:55:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.575 20:55:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:14.575 20:55:39 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:14.575 20:55:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:24:14.575 20:55:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.575 20:55:39 -- host/auth.sh@68 -- # digest=sha256 00:24:14.575 20:55:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:14.575 20:55:39 -- host/auth.sh@68 -- # keyid=2 00:24:14.575 20:55:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.575 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.575 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.575 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.575 20:55:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.575 20:55:39 -- nvmf/common.sh@717 -- # local ip 00:24:14.575 20:55:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.575 20:55:39 -- nvmf/common.sh@718 
-- # local -A ip_candidates 00:24:14.575 20:55:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.575 20:55:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.575 20:55:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.575 20:55:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.575 20:55:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.575 20:55:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.575 20:55:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.575 20:55:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.575 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.575 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.836 nvme0n1 00:24:14.836 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.836 20:55:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.836 20:55:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.836 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.836 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.836 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.836 20:55:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.836 20:55:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.836 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.836 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.836 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.836 20:55:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:14.836 20:55:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:14.836 20:55:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.836 20:55:39 -- host/auth.sh@44 -- # digest=sha256 00:24:14.836 20:55:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.836 20:55:39 -- host/auth.sh@44 -- # keyid=3 00:24:14.836 20:55:39 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:14.836 20:55:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.836 20:55:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:14.836 20:55:39 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:14.836 20:55:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:24:14.836 20:55:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:14.836 20:55:39 -- host/auth.sh@68 -- # digest=sha256 00:24:14.836 20:55:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:14.836 20:55:39 -- host/auth.sh@68 -- # keyid=3 00:24:14.836 20:55:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.836 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.836 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:14.836 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.836 20:55:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:14.836 20:55:39 -- nvmf/common.sh@717 -- # local ip 00:24:14.836 20:55:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.836 20:55:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.836 20:55:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 
00:24:14.836 20:55:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.836 20:55:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.836 20:55:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.836 20:55:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.836 20:55:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.836 20:55:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.836 20:55:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:14.836 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.836 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.097 nvme0n1 00:24:15.097 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.097 20:55:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.097 20:55:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.097 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.097 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.097 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.097 20:55:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.097 20:55:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.097 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.097 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.097 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.097 20:55:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.097 20:55:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:15.097 20:55:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.097 20:55:39 -- host/auth.sh@44 -- # digest=sha256 00:24:15.097 20:55:39 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:15.097 20:55:39 -- host/auth.sh@44 -- # keyid=4 00:24:15.097 20:55:39 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:15.097 20:55:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.097 20:55:39 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:15.097 20:55:39 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:15.097 20:55:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:24:15.097 20:55:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.097 20:55:39 -- host/auth.sh@68 -- # digest=sha256 00:24:15.097 20:55:39 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:15.097 20:55:39 -- host/auth.sh@68 -- # keyid=4 00:24:15.097 20:55:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:15.097 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.097 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.097 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.097 20:55:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.097 20:55:39 -- nvmf/common.sh@717 -- # local ip 00:24:15.097 20:55:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.097 20:55:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.097 20:55:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.097 20:55:39 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.097 20:55:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.097 20:55:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.097 20:55:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.097 20:55:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.097 20:55:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.097 20:55:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.097 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.097 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.358 nvme0n1 00:24:15.358 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.358 20:55:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.358 20:55:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.358 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.358 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.358 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.358 20:55:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.358 20:55:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.358 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.358 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.358 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.358 20:55:39 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.358 20:55:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.358 20:55:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:15.358 20:55:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.358 20:55:39 -- host/auth.sh@44 -- # digest=sha256 00:24:15.358 20:55:39 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.358 20:55:39 -- host/auth.sh@44 -- # keyid=0 00:24:15.358 20:55:39 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:15.358 20:55:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.358 20:55:39 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:15.358 20:55:39 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:15.358 20:55:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:24:15.358 20:55:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.358 20:55:39 -- host/auth.sh@68 -- # digest=sha256 00:24:15.358 20:55:39 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:15.358 20:55:39 -- host/auth.sh@68 -- # keyid=0 00:24:15.358 20:55:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:15.358 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.358 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.358 20:55:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.358 20:55:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.358 20:55:39 -- nvmf/common.sh@717 -- # local ip 00:24:15.358 20:55:39 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.358 20:55:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.358 20:55:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.358 20:55:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.358 20:55:39 -- nvmf/common.sh@723 -- # 
[[ -z tcp ]] 00:24:15.358 20:55:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.358 20:55:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.358 20:55:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.358 20:55:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.358 20:55:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:15.358 20:55:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.358 20:55:39 -- common/autotest_common.sh@10 -- # set +x 00:24:15.619 nvme0n1 00:24:15.619 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.619 20:55:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.619 20:55:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.619 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.619 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:15.619 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.619 20:55:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.619 20:55:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.619 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.619 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:15.619 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.619 20:55:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.619 20:55:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:15.619 20:55:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.619 20:55:40 -- host/auth.sh@44 -- # digest=sha256 00:24:15.619 20:55:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.619 20:55:40 -- host/auth.sh@44 -- # keyid=1 00:24:15.619 20:55:40 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:15.619 20:55:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.619 20:55:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:15.619 20:55:40 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:15.619 20:55:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:24:15.619 20:55:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.619 20:55:40 -- host/auth.sh@68 -- # digest=sha256 00:24:15.619 20:55:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:15.619 20:55:40 -- host/auth.sh@68 -- # keyid=1 00:24:15.619 20:55:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:15.619 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.619 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:15.619 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.619 20:55:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.619 20:55:40 -- nvmf/common.sh@717 -- # local ip 00:24:15.619 20:55:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.619 20:55:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.619 20:55:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.619 20:55:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.619 20:55:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.619 20:55:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.619 20:55:40 -- 
nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.619 20:55:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.619 20:55:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:15.619 20:55:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:15.619 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.619 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:15.880 nvme0n1 00:24:15.880 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.880 20:55:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.880 20:55:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:15.880 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.880 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:15.880 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.880 20:55:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.880 20:55:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.880 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.880 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:15.880 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.880 20:55:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:15.880 20:55:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:15.880 20:55:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:15.880 20:55:40 -- host/auth.sh@44 -- # digest=sha256 00:24:15.880 20:55:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.880 20:55:40 -- host/auth.sh@44 -- # keyid=2 00:24:15.880 20:55:40 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:15.880 20:55:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:15.880 20:55:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:15.880 20:55:40 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:15.880 20:55:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:24:15.880 20:55:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:15.880 20:55:40 -- host/auth.sh@68 -- # digest=sha256 00:24:15.880 20:55:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:15.880 20:55:40 -- host/auth.sh@68 -- # keyid=2 00:24:15.880 20:55:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:15.880 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.880 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:15.880 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:15.880 20:55:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:15.880 20:55:40 -- nvmf/common.sh@717 -- # local ip 00:24:15.880 20:55:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:15.880 20:55:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:15.880 20:55:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.880 20:55:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.880 20:55:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:15.880 20:55:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.880 20:55:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:15.880 20:55:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:15.880 20:55:40 -- nvmf/common.sh@731 -- # echo 
10.0.0.1 00:24:15.880 20:55:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:15.880 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:15.880 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.141 nvme0n1 00:24:16.141 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.141 20:55:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.141 20:55:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.141 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.141 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.141 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.141 20:55:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.141 20:55:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.141 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.141 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.141 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.141 20:55:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.141 20:55:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:16.141 20:55:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.141 20:55:40 -- host/auth.sh@44 -- # digest=sha256 00:24:16.141 20:55:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:16.141 20:55:40 -- host/auth.sh@44 -- # keyid=3 00:24:16.141 20:55:40 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:16.141 20:55:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.141 20:55:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:16.141 20:55:40 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:16.141 20:55:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:24:16.141 20:55:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.141 20:55:40 -- host/auth.sh@68 -- # digest=sha256 00:24:16.141 20:55:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:16.141 20:55:40 -- host/auth.sh@68 -- # keyid=3 00:24:16.141 20:55:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:16.141 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.141 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.141 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.141 20:55:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.141 20:55:40 -- nvmf/common.sh@717 -- # local ip 00:24:16.141 20:55:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.141 20:55:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.141 20:55:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.141 20:55:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.141 20:55:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.141 20:55:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.141 20:55:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.141 20:55:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.141 20:55:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.141 20:55:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:16.141 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.141 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.401 nvme0n1 00:24:16.401 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.401 20:55:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.401 20:55:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.401 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.401 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.401 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.401 20:55:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.401 20:55:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.401 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.401 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.401 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.401 20:55:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.401 20:55:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:16.401 20:55:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.401 20:55:40 -- host/auth.sh@44 -- # digest=sha256 00:24:16.401 20:55:40 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:16.401 20:55:40 -- host/auth.sh@44 -- # keyid=4 00:24:16.401 20:55:40 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:16.401 20:55:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.401 20:55:40 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:16.401 20:55:40 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:16.401 20:55:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:24:16.401 20:55:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.401 20:55:40 -- host/auth.sh@68 -- # digest=sha256 00:24:16.401 20:55:40 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:16.401 20:55:40 -- host/auth.sh@68 -- # keyid=4 00:24:16.401 20:55:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:16.401 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.401 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.401 20:55:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.401 20:55:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.401 20:55:40 -- nvmf/common.sh@717 -- # local ip 00:24:16.401 20:55:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.401 20:55:40 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.401 20:55:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.401 20:55:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.401 20:55:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.401 20:55:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.401 20:55:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.401 20:55:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.401 20:55:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.401 20:55:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key4 00:24:16.401 20:55:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.401 20:55:40 -- common/autotest_common.sh@10 -- # set +x 00:24:16.662 nvme0n1 00:24:16.662 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.662 20:55:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.662 20:55:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.662 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.662 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.662 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.662 20:55:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.662 20:55:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.662 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.662 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.662 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.662 20:55:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.662 20:55:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.662 20:55:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:16.662 20:55:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.662 20:55:41 -- host/auth.sh@44 -- # digest=sha256 00:24:16.662 20:55:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.662 20:55:41 -- host/auth.sh@44 -- # keyid=0 00:24:16.662 20:55:41 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:16.662 20:55:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.662 20:55:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:16.662 20:55:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:16.662 20:55:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:24:16.662 20:55:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.662 20:55:41 -- host/auth.sh@68 -- # digest=sha256 00:24:16.662 20:55:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:16.662 20:55:41 -- host/auth.sh@68 -- # keyid=0 00:24:16.662 20:55:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.662 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.662 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.662 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.662 20:55:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.662 20:55:41 -- nvmf/common.sh@717 -- # local ip 00:24:16.662 20:55:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.662 20:55:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.662 20:55:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.662 20:55:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.662 20:55:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.662 20:55:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.662 20:55:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.662 20:55:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.662 20:55:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.662 20:55:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:16.662 20:55:41 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:16.662 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.922 nvme0n1 00:24:16.922 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.922 20:55:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.922 20:55:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:16.922 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.922 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.922 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.922 20:55:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.922 20:55:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.922 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.922 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.922 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.922 20:55:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:16.922 20:55:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:16.922 20:55:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:16.922 20:55:41 -- host/auth.sh@44 -- # digest=sha256 00:24:16.922 20:55:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.922 20:55:41 -- host/auth.sh@44 -- # keyid=1 00:24:16.922 20:55:41 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:16.922 20:55:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:16.922 20:55:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:16.922 20:55:41 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:16.922 20:55:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:24:16.922 20:55:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:16.923 20:55:41 -- host/auth.sh@68 -- # digest=sha256 00:24:16.923 20:55:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:16.923 20:55:41 -- host/auth.sh@68 -- # keyid=1 00:24:16.923 20:55:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:16.923 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.923 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:16.923 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:16.923 20:55:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:16.923 20:55:41 -- nvmf/common.sh@717 -- # local ip 00:24:16.923 20:55:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:16.923 20:55:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:16.923 20:55:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.923 20:55:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.923 20:55:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:16.923 20:55:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.923 20:55:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:16.923 20:55:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:16.923 20:55:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:16.923 20:55:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:16.923 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:16.923 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:17.183 nvme0n1 00:24:17.183 
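The get_main_ns_ip calls traced at nvmf/common.sh@717-731 resolve the initiator address used for every attach. A rough sketch follows; only the candidate table and the resulting 10.0.0.1 are visible in the trace, so the indirect expansion and the TEST_TRANSPORT variable name are assumptions.

# Assumed shape of the address lookup (not the verbatim helper):
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp selects NVMF_INITIATOR_IP
    echo "${!ip}"                          # indirect expansion; prints 10.0.0.1 in this run
}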
20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.183 20:55:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.183 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.183 20:55:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.183 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:17.183 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.183 20:55:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.183 20:55:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.183 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.183 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.443 20:55:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.443 20:55:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:17.443 20:55:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.443 20:55:41 -- host/auth.sh@44 -- # digest=sha256 00:24:17.443 20:55:41 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:17.443 20:55:41 -- host/auth.sh@44 -- # keyid=2 00:24:17.443 20:55:41 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:17.443 20:55:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.443 20:55:41 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:17.443 20:55:41 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:17.443 20:55:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:24:17.443 20:55:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.443 20:55:41 -- host/auth.sh@68 -- # digest=sha256 00:24:17.443 20:55:41 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:17.443 20:55:41 -- host/auth.sh@68 -- # keyid=2 00:24:17.443 20:55:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:17.443 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.443 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:17.443 20:55:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.443 20:55:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.443 20:55:41 -- nvmf/common.sh@717 -- # local ip 00:24:17.443 20:55:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.443 20:55:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.443 20:55:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.443 20:55:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.443 20:55:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.443 20:55:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.443 20:55:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.443 20:55:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.443 20:55:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.443 20:55:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:17.443 20:55:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.443 20:55:41 -- common/autotest_common.sh@10 -- # set +x 00:24:17.704 nvme0n1 00:24:17.704 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.704 20:55:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.704 20:55:42 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.704 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.704 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:17.704 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.704 20:55:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.704 20:55:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.704 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.704 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:17.704 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.704 20:55:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.704 20:55:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:17.704 20:55:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.704 20:55:42 -- host/auth.sh@44 -- # digest=sha256 00:24:17.704 20:55:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:17.704 20:55:42 -- host/auth.sh@44 -- # keyid=3 00:24:17.704 20:55:42 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:17.704 20:55:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.704 20:55:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:17.704 20:55:42 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:17.704 20:55:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:24:17.704 20:55:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.704 20:55:42 -- host/auth.sh@68 -- # digest=sha256 00:24:17.704 20:55:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:17.704 20:55:42 -- host/auth.sh@68 -- # keyid=3 00:24:17.704 20:55:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:17.704 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.704 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:17.704 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.704 20:55:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.704 20:55:42 -- nvmf/common.sh@717 -- # local ip 00:24:17.704 20:55:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.704 20:55:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.704 20:55:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.704 20:55:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.704 20:55:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.704 20:55:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.704 20:55:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.704 20:55:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.704 20:55:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.704 20:55:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:17.704 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.704 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:17.964 nvme0n1 00:24:17.964 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.964 20:55:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.964 20:55:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:17.964 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 
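The host/auth.sh@108-111 markers that repeat through this part of the run correspond to the nested iteration sketched below; the loop values are inferred from what actually appears in the trace (sha256 digest, DH groups ffdhe2048 through ffdhe8192, key ids 0-4), so treat it as a summary rather than the script itself.

# sha256 pass over every DH group and key id seen in this log:
for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3 4; do
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # program the kernel target side
        connect_authenticate sha256 "$dhgroup" "$keyid"   # attach, verify, detach via rpc_cmd
    done
done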
00:24:17.964 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:17.964 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.964 20:55:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.964 20:55:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.964 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.964 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:17.964 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.964 20:55:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:17.964 20:55:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:17.964 20:55:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:17.964 20:55:42 -- host/auth.sh@44 -- # digest=sha256 00:24:17.964 20:55:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:17.964 20:55:42 -- host/auth.sh@44 -- # keyid=4 00:24:17.964 20:55:42 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:17.964 20:55:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:17.964 20:55:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:17.964 20:55:42 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:17.964 20:55:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:24:17.964 20:55:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:17.964 20:55:42 -- host/auth.sh@68 -- # digest=sha256 00:24:17.964 20:55:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:17.964 20:55:42 -- host/auth.sh@68 -- # keyid=4 00:24:17.964 20:55:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:17.964 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.964 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:17.964 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:17.964 20:55:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:17.964 20:55:42 -- nvmf/common.sh@717 -- # local ip 00:24:17.964 20:55:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:17.964 20:55:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:17.965 20:55:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.965 20:55:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.965 20:55:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:17.965 20:55:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.965 20:55:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:17.965 20:55:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:17.965 20:55:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:17.965 20:55:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.965 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:17.965 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:18.225 nvme0n1 00:24:18.225 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.225 20:55:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.225 20:55:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.225 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.225 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:18.225 
20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.489 20:55:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.489 20:55:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.489 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.489 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:18.489 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.489 20:55:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.489 20:55:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:18.489 20:55:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:18.489 20:55:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:18.489 20:55:42 -- host/auth.sh@44 -- # digest=sha256 00:24:18.489 20:55:42 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.489 20:55:42 -- host/auth.sh@44 -- # keyid=0 00:24:18.489 20:55:42 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:18.490 20:55:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:18.490 20:55:42 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:18.490 20:55:42 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:18.490 20:55:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:24:18.490 20:55:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:18.490 20:55:42 -- host/auth.sh@68 -- # digest=sha256 00:24:18.490 20:55:42 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:18.490 20:55:42 -- host/auth.sh@68 -- # keyid=0 00:24:18.490 20:55:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:18.490 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.490 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:18.490 20:55:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.490 20:55:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:18.490 20:55:42 -- nvmf/common.sh@717 -- # local ip 00:24:18.490 20:55:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:18.490 20:55:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:18.490 20:55:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.490 20:55:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.490 20:55:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:18.490 20:55:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.490 20:55:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:18.490 20:55:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:18.490 20:55:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:18.490 20:55:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:18.490 20:55:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.490 20:55:42 -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 nvme0n1 00:24:18.749 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:18.749 20:55:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.749 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:18.749 20:55:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:18.749 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:18.749 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.009 20:55:43 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.009 20:55:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.009 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.009 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:19.009 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.009 20:55:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.009 20:55:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:19.009 20:55:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.009 20:55:43 -- host/auth.sh@44 -- # digest=sha256 00:24:19.009 20:55:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.009 20:55:43 -- host/auth.sh@44 -- # keyid=1 00:24:19.009 20:55:43 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:19.009 20:55:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:19.009 20:55:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:19.009 20:55:43 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:19.009 20:55:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:24:19.009 20:55:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.009 20:55:43 -- host/auth.sh@68 -- # digest=sha256 00:24:19.009 20:55:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:19.009 20:55:43 -- host/auth.sh@68 -- # keyid=1 00:24:19.009 20:55:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:19.009 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.009 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:19.009 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.009 20:55:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.009 20:55:43 -- nvmf/common.sh@717 -- # local ip 00:24:19.009 20:55:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.009 20:55:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.009 20:55:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.009 20:55:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.009 20:55:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.009 20:55:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.009 20:55:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.009 20:55:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.009 20:55:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.009 20:55:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:19.009 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.009 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:19.269 nvme0n1 00:24:19.269 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.269 20:55:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.269 20:55:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:19.269 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.269 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:19.528 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.529 20:55:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.529 20:55:43 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:19.529 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.529 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:19.529 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.529 20:55:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:19.529 20:55:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:19.529 20:55:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:19.529 20:55:43 -- host/auth.sh@44 -- # digest=sha256 00:24:19.529 20:55:43 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:19.529 20:55:43 -- host/auth.sh@44 -- # keyid=2 00:24:19.529 20:55:43 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:19.529 20:55:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:19.529 20:55:43 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:19.529 20:55:43 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:19.529 20:55:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:24:19.529 20:55:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:19.529 20:55:43 -- host/auth.sh@68 -- # digest=sha256 00:24:19.529 20:55:43 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:19.529 20:55:43 -- host/auth.sh@68 -- # keyid=2 00:24:19.529 20:55:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:19.529 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.529 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:19.529 20:55:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:19.529 20:55:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:19.529 20:55:43 -- nvmf/common.sh@717 -- # local ip 00:24:19.529 20:55:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:19.529 20:55:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:19.529 20:55:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.529 20:55:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.529 20:55:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:19.529 20:55:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.529 20:55:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:19.529 20:55:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:19.529 20:55:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:19.529 20:55:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:19.529 20:55:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:19.529 20:55:43 -- common/autotest_common.sh@10 -- # set +x 00:24:20.102 nvme0n1 00:24:20.102 20:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.102 20:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.102 20:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.102 20:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.102 20:55:44 -- common/autotest_common.sh@10 -- # set +x 00:24:20.102 20:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.102 20:55:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.102 20:55:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.102 20:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.102 20:55:44 -- common/autotest_common.sh@10 -- # 
set +x 00:24:20.102 20:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.102 20:55:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.102 20:55:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:20.102 20:55:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.102 20:55:44 -- host/auth.sh@44 -- # digest=sha256 00:24:20.102 20:55:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.102 20:55:44 -- host/auth.sh@44 -- # keyid=3 00:24:20.102 20:55:44 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:20.102 20:55:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:20.102 20:55:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:20.102 20:55:44 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:20.102 20:55:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:24:20.102 20:55:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.102 20:55:44 -- host/auth.sh@68 -- # digest=sha256 00:24:20.102 20:55:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:20.102 20:55:44 -- host/auth.sh@68 -- # keyid=3 00:24:20.102 20:55:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:20.102 20:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.102 20:55:44 -- common/autotest_common.sh@10 -- # set +x 00:24:20.102 20:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.102 20:55:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.102 20:55:44 -- nvmf/common.sh@717 -- # local ip 00:24:20.102 20:55:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.102 20:55:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.102 20:55:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.102 20:55:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.102 20:55:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.102 20:55:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.102 20:55:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.102 20:55:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.102 20:55:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.102 20:55:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:20.102 20:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.102 20:55:44 -- common/autotest_common.sh@10 -- # set +x 00:24:20.364 nvme0n1 00:24:20.364 20:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.364 20:55:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.364 20:55:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.364 20:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.364 20:55:44 -- common/autotest_common.sh@10 -- # set +x 00:24:20.364 20:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.672 20:55:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.672 20:55:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.672 20:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.672 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:24:20.672 20:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.672 20:55:45 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:20.672 20:55:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:20.672 20:55:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:20.672 20:55:45 -- host/auth.sh@44 -- # digest=sha256 00:24:20.672 20:55:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:20.672 20:55:45 -- host/auth.sh@44 -- # keyid=4 00:24:20.672 20:55:45 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:20.672 20:55:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:20.672 20:55:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:20.672 20:55:45 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:20.672 20:55:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:24:20.672 20:55:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:20.672 20:55:45 -- host/auth.sh@68 -- # digest=sha256 00:24:20.672 20:55:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:20.672 20:55:45 -- host/auth.sh@68 -- # keyid=4 00:24:20.673 20:55:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:20.673 20:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.673 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:24:20.673 20:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.673 20:55:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:20.673 20:55:45 -- nvmf/common.sh@717 -- # local ip 00:24:20.673 20:55:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:20.673 20:55:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:20.673 20:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.673 20:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.673 20:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:20.673 20:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.673 20:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:20.673 20:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:20.673 20:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:20.673 20:55:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:20.673 20:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.673 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:24:20.933 nvme0n1 00:24:20.933 20:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.933 20:55:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.933 20:55:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:20.933 20:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.933 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:24:20.933 20:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:20.933 20:55:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.933 20:55:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.933 20:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:20.933 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:24:21.194 20:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.194 20:55:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.195 20:55:45 -- 
host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:21.195 20:55:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:21.195 20:55:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:21.195 20:55:45 -- host/auth.sh@44 -- # digest=sha256 00:24:21.195 20:55:45 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.195 20:55:45 -- host/auth.sh@44 -- # keyid=0 00:24:21.195 20:55:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:21.195 20:55:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:21.195 20:55:45 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:21.195 20:55:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:21.195 20:55:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:24:21.195 20:55:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:21.195 20:55:45 -- host/auth.sh@68 -- # digest=sha256 00:24:21.195 20:55:45 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:21.195 20:55:45 -- host/auth.sh@68 -- # keyid=0 00:24:21.195 20:55:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:21.195 20:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.195 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:24:21.195 20:55:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.195 20:55:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:21.195 20:55:45 -- nvmf/common.sh@717 -- # local ip 00:24:21.195 20:55:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:21.195 20:55:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:21.195 20:55:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.195 20:55:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.195 20:55:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:21.195 20:55:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.195 20:55:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:21.195 20:55:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:21.195 20:55:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:21.195 20:55:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:21.195 20:55:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.195 20:55:45 -- common/autotest_common.sh@10 -- # set +x 00:24:21.767 nvme0n1 00:24:21.767 20:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.767 20:55:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.767 20:55:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:21.767 20:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.767 20:55:46 -- common/autotest_common.sh@10 -- # set +x 00:24:21.767 20:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:21.767 20:55:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.767 20:55:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.767 20:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:21.767 20:55:46 -- common/autotest_common.sh@10 -- # set +x 00:24:22.028 20:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.028 20:55:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.028 20:55:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:22.028 20:55:46 -- 
host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.028 20:55:46 -- host/auth.sh@44 -- # digest=sha256 00:24:22.028 20:55:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.028 20:55:46 -- host/auth.sh@44 -- # keyid=1 00:24:22.028 20:55:46 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:22.028 20:55:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:22.028 20:55:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:22.028 20:55:46 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:22.028 20:55:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:24:22.028 20:55:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.028 20:55:46 -- host/auth.sh@68 -- # digest=sha256 00:24:22.028 20:55:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:22.028 20:55:46 -- host/auth.sh@68 -- # keyid=1 00:24:22.028 20:55:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:22.028 20:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.028 20:55:46 -- common/autotest_common.sh@10 -- # set +x 00:24:22.028 20:55:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.028 20:55:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.028 20:55:46 -- nvmf/common.sh@717 -- # local ip 00:24:22.028 20:55:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.028 20:55:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.028 20:55:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.028 20:55:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.028 20:55:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.028 20:55:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.028 20:55:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.028 20:55:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.028 20:55:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:22.028 20:55:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:22.028 20:55:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.028 20:55:46 -- common/autotest_common.sh@10 -- # set +x 00:24:22.600 nvme0n1 00:24:22.600 20:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.600 20:55:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.600 20:55:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:22.600 20:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.600 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:24:22.600 20:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.600 20:55:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.600 20:55:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.600 20:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.600 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:24:22.861 20:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.861 20:55:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:22.861 20:55:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:22.861 20:55:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:22.861 20:55:47 -- host/auth.sh@44 -- # digest=sha256 
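The trace above and below repeats one fixed pattern per digest/dhgroup/keyid combination: configure the host's allowed DH-HMAC-CHAP parameters, attach the controller with the key under test, confirm a controller named nvme0 appears, then detach before the next combination. A minimal host-side sketch of that loop body, using the same RPCs and arguments the log shows (./scripts/rpc.py is assumed here as the client behind the rpc_cmd wrapper; the NQNs, address and key id are the ones used in this run):

# restrict the host to one digest and one DH group for this iteration
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
# attach with the key under test; the attach only succeeds if the DH-HMAC-CHAP handshake passes
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
# verify, then tear down before the next digest/dhgroup/keyid combination
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
./scripts/rpc.py bdev_nvme_detach_controller nvme0

(The trace of the nvmet_auth_set_key sha256 ffdhe8192 1 call in progress here continues directly below.)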
00:24:22.861 20:55:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:22.861 20:55:47 -- host/auth.sh@44 -- # keyid=2 00:24:22.861 20:55:47 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:22.861 20:55:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:22.861 20:55:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:22.861 20:55:47 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:22.861 20:55:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:24:22.861 20:55:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:22.861 20:55:47 -- host/auth.sh@68 -- # digest=sha256 00:24:22.861 20:55:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:22.861 20:55:47 -- host/auth.sh@68 -- # keyid=2 00:24:22.861 20:55:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:22.861 20:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.861 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:24:22.861 20:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:22.861 20:55:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:22.861 20:55:47 -- nvmf/common.sh@717 -- # local ip 00:24:22.861 20:55:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:22.861 20:55:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:22.861 20:55:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.861 20:55:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.861 20:55:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:22.861 20:55:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.861 20:55:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:22.861 20:55:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:22.861 20:55:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:22.861 20:55:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:22.861 20:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:22.861 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 nvme0n1 00:24:23.432 20:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.432 20:55:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.432 20:55:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.432 20:55:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:23.432 20:55:47 -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 20:55:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.432 20:55:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.432 20:55:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.432 20:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.432 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 20:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.432 20:55:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:23.432 20:55:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:23.432 20:55:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:23.432 20:55:48 -- host/auth.sh@44 -- # digest=sha256 00:24:23.432 20:55:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:23.432 20:55:48 -- host/auth.sh@44 -- # keyid=3 00:24:23.432 20:55:48 -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:23.432 20:55:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:23.432 20:55:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:23.432 20:55:48 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:23.432 20:55:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:24:23.432 20:55:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:23.432 20:55:48 -- host/auth.sh@68 -- # digest=sha256 00:24:23.432 20:55:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:23.432 20:55:48 -- host/auth.sh@68 -- # keyid=3 00:24:23.432 20:55:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:23.432 20:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.432 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:24:23.432 20:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:23.432 20:55:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:23.432 20:55:48 -- nvmf/common.sh@717 -- # local ip 00:24:23.432 20:55:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:23.432 20:55:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:23.432 20:55:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.432 20:55:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.432 20:55:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:23.432 20:55:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.432 20:55:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:23.432 20:55:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:23.432 20:55:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:23.432 20:55:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:23.432 20:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:23.432 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:24:24.372 nvme0n1 00:24:24.372 20:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.372 20:55:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.372 20:55:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:24.372 20:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.372 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:24:24.373 20:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.373 20:55:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.373 20:55:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.373 20:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.373 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:24:24.373 20:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.373 20:55:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:24.373 20:55:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:24.373 20:55:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:24.373 20:55:48 -- host/auth.sh@44 -- # digest=sha256 00:24:24.373 20:55:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:24.373 20:55:48 -- host/auth.sh@44 -- # keyid=4 00:24:24.373 20:55:48 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:24.373 
20:55:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:24.373 20:55:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:24.373 20:55:48 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:24.373 20:55:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:24:24.373 20:55:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:24.373 20:55:48 -- host/auth.sh@68 -- # digest=sha256 00:24:24.373 20:55:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:24.373 20:55:48 -- host/auth.sh@68 -- # keyid=4 00:24:24.373 20:55:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:24.373 20:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.373 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:24:24.373 20:55:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:24.373 20:55:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:24.373 20:55:48 -- nvmf/common.sh@717 -- # local ip 00:24:24.373 20:55:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:24.373 20:55:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:24.373 20:55:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.373 20:55:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.373 20:55:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:24.373 20:55:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.373 20:55:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:24.373 20:55:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:24.373 20:55:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:24.373 20:55:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.373 20:55:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:24.373 20:55:48 -- common/autotest_common.sh@10 -- # set +x 00:24:25.313 nvme0n1 00:24:25.313 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.313 20:55:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.313 20:55:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.313 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.313 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.313 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.313 20:55:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.313 20:55:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.313 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.313 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.313 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.313 20:55:49 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:25.313 20:55:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.313 20:55:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.314 20:55:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:25.314 20:55:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.314 20:55:49 -- host/auth.sh@44 -- # digest=sha384 00:24:25.314 20:55:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.314 20:55:49 -- host/auth.sh@44 -- # keyid=0 00:24:25.314 20:55:49 -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:25.314 20:55:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.314 20:55:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.314 20:55:49 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:25.314 20:55:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:24:25.314 20:55:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.314 20:55:49 -- host/auth.sh@68 -- # digest=sha384 00:24:25.314 20:55:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.314 20:55:49 -- host/auth.sh@68 -- # keyid=0 00:24:25.314 20:55:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:25.314 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.314 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.314 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.314 20:55:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.314 20:55:49 -- nvmf/common.sh@717 -- # local ip 00:24:25.314 20:55:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.314 20:55:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.314 20:55:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.314 20:55:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.314 20:55:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.314 20:55:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.314 20:55:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.314 20:55:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.314 20:55:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.314 20:55:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:25.314 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.314 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.314 nvme0n1 00:24:25.314 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.314 20:55:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.314 20:55:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.314 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.314 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.314 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.314 20:55:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.314 20:55:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.314 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.314 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.314 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.314 20:55:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.314 20:55:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:25.314 20:55:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.314 20:55:49 -- host/auth.sh@44 -- # digest=sha384 00:24:25.314 20:55:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.314 20:55:49 -- host/auth.sh@44 -- # keyid=1 00:24:25.314 20:55:49 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:25.314 20:55:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.314 
20:55:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.314 20:55:49 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:25.314 20:55:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:24:25.314 20:55:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.314 20:55:49 -- host/auth.sh@68 -- # digest=sha384 00:24:25.314 20:55:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.314 20:55:49 -- host/auth.sh@68 -- # keyid=1 00:24:25.314 20:55:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:25.314 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.314 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.314 20:55:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.314 20:55:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.314 20:55:49 -- nvmf/common.sh@717 -- # local ip 00:24:25.314 20:55:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.314 20:55:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.314 20:55:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.314 20:55:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.314 20:55:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.314 20:55:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.314 20:55:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.314 20:55:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.314 20:55:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.314 20:55:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:25.314 20:55:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.314 20:55:49 -- common/autotest_common.sh@10 -- # set +x 00:24:25.574 nvme0n1 00:24:25.574 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.574 20:55:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.574 20:55:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.574 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.574 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.574 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.574 20:55:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.574 20:55:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.574 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.574 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.574 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.574 20:55:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.574 20:55:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:25.574 20:55:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.574 20:55:50 -- host/auth.sh@44 -- # digest=sha384 00:24:25.574 20:55:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.574 20:55:50 -- host/auth.sh@44 -- # keyid=2 00:24:25.574 20:55:50 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:25.574 20:55:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.574 20:55:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.574 20:55:50 -- host/auth.sh@49 -- # echo 
DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:25.574 20:55:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:24:25.574 20:55:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.574 20:55:50 -- host/auth.sh@68 -- # digest=sha384 00:24:25.574 20:55:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.574 20:55:50 -- host/auth.sh@68 -- # keyid=2 00:24:25.574 20:55:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:25.574 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.574 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.574 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.574 20:55:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.574 20:55:50 -- nvmf/common.sh@717 -- # local ip 00:24:25.574 20:55:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.575 20:55:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.575 20:55:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.575 20:55:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.575 20:55:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.575 20:55:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.575 20:55:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.575 20:55:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.575 20:55:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.575 20:55:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:25.575 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.575 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.835 nvme0n1 00:24:25.835 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.835 20:55:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.835 20:55:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:25.835 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.835 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.835 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.835 20:55:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.835 20:55:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.835 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.835 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.835 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.835 20:55:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:25.835 20:55:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:25.835 20:55:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:25.835 20:55:50 -- host/auth.sh@44 -- # digest=sha384 00:24:25.835 20:55:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:25.835 20:55:50 -- host/auth.sh@44 -- # keyid=3 00:24:25.835 20:55:50 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:25.835 20:55:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:25.835 20:55:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:25.835 20:55:50 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:25.835 20:55:50 -- host/auth.sh@111 -- # 
connect_authenticate sha384 ffdhe2048 3 00:24:25.835 20:55:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:25.835 20:55:50 -- host/auth.sh@68 -- # digest=sha384 00:24:25.835 20:55:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:25.835 20:55:50 -- host/auth.sh@68 -- # keyid=3 00:24:25.835 20:55:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:25.835 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.835 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:25.835 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.835 20:55:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:25.835 20:55:50 -- nvmf/common.sh@717 -- # local ip 00:24:25.835 20:55:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:25.835 20:55:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:25.835 20:55:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.835 20:55:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.835 20:55:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:25.835 20:55:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.835 20:55:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:25.835 20:55:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:25.835 20:55:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:25.835 20:55:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:25.835 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.835 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.095 nvme0n1 00:24:26.095 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.095 20:55:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.095 20:55:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.095 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.095 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.095 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.095 20:55:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.095 20:55:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.095 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.095 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.095 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.095 20:55:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.095 20:55:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:26.095 20:55:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.095 20:55:50 -- host/auth.sh@44 -- # digest=sha384 00:24:26.095 20:55:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:26.095 20:55:50 -- host/auth.sh@44 -- # keyid=4 00:24:26.095 20:55:50 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:26.095 20:55:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.095 20:55:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:26.095 20:55:50 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:26.095 20:55:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:24:26.095 20:55:50 -- host/auth.sh@66 
-- # local digest dhgroup keyid 00:24:26.095 20:55:50 -- host/auth.sh@68 -- # digest=sha384 00:24:26.095 20:55:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:26.095 20:55:50 -- host/auth.sh@68 -- # keyid=4 00:24:26.095 20:55:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:26.095 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.095 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.095 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.095 20:55:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.095 20:55:50 -- nvmf/common.sh@717 -- # local ip 00:24:26.095 20:55:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.095 20:55:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.095 20:55:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.095 20:55:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.095 20:55:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.095 20:55:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.095 20:55:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.095 20:55:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.095 20:55:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.095 20:55:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.095 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.095 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.355 nvme0n1 00:24:26.355 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.355 20:55:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.355 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.355 20:55:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.355 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.355 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.355 20:55:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.355 20:55:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.355 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.355 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.355 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.355 20:55:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.355 20:55:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.355 20:55:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:26.355 20:55:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.355 20:55:50 -- host/auth.sh@44 -- # digest=sha384 00:24:26.355 20:55:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.355 20:55:50 -- host/auth.sh@44 -- # keyid=0 00:24:26.355 20:55:50 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:26.355 20:55:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.355 20:55:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.355 20:55:50 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:26.355 20:55:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:24:26.355 20:55:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.355 20:55:50 -- host/auth.sh@68 -- # 
digest=sha384 00:24:26.355 20:55:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.355 20:55:50 -- host/auth.sh@68 -- # keyid=0 00:24:26.355 20:55:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:26.355 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.355 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.355 20:55:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.355 20:55:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.355 20:55:50 -- nvmf/common.sh@717 -- # local ip 00:24:26.355 20:55:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.355 20:55:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.355 20:55:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.355 20:55:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.355 20:55:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.355 20:55:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.355 20:55:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.355 20:55:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.355 20:55:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.355 20:55:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:26.355 20:55:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.355 20:55:50 -- common/autotest_common.sh@10 -- # set +x 00:24:26.614 nvme0n1 00:24:26.615 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.615 20:55:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.615 20:55:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.615 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.615 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:26.615 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.615 20:55:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.615 20:55:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.615 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.615 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:26.615 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.615 20:55:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.615 20:55:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:26.615 20:55:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.615 20:55:51 -- host/auth.sh@44 -- # digest=sha384 00:24:26.615 20:55:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.615 20:55:51 -- host/auth.sh@44 -- # keyid=1 00:24:26.615 20:55:51 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:26.615 20:55:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.615 20:55:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.615 20:55:51 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:26.615 20:55:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:24:26.615 20:55:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.615 20:55:51 -- host/auth.sh@68 -- # digest=sha384 00:24:26.615 20:55:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.615 20:55:51 -- host/auth.sh@68 
-- # keyid=1 00:24:26.615 20:55:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:26.615 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.615 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:26.615 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.615 20:55:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.615 20:55:51 -- nvmf/common.sh@717 -- # local ip 00:24:26.615 20:55:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.615 20:55:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.615 20:55:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.615 20:55:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.615 20:55:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.615 20:55:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.615 20:55:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.615 20:55:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.615 20:55:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.615 20:55:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:26.615 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.615 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:26.875 nvme0n1 00:24:26.875 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.875 20:55:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.875 20:55:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:26.875 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.875 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:26.875 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.875 20:55:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.875 20:55:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.875 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.875 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:26.875 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.875 20:55:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:26.875 20:55:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:26.875 20:55:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:26.875 20:55:51 -- host/auth.sh@44 -- # digest=sha384 00:24:26.875 20:55:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:26.875 20:55:51 -- host/auth.sh@44 -- # keyid=2 00:24:26.875 20:55:51 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:26.875 20:55:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:26.875 20:55:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:26.875 20:55:51 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:26.875 20:55:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:24:26.875 20:55:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:26.875 20:55:51 -- host/auth.sh@68 -- # digest=sha384 00:24:26.875 20:55:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:26.875 20:55:51 -- host/auth.sh@68 -- # keyid=2 00:24:26.875 20:55:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:26.875 20:55:51 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.875 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:26.875 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.875 20:55:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:26.875 20:55:51 -- nvmf/common.sh@717 -- # local ip 00:24:26.875 20:55:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:26.875 20:55:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:26.875 20:55:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.875 20:55:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.875 20:55:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:26.875 20:55:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.875 20:55:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:26.875 20:55:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:26.875 20:55:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:26.875 20:55:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:26.875 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.875 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.136 nvme0n1 00:24:27.136 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.136 20:55:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.136 20:55:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.136 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.136 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.136 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.136 20:55:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.136 20:55:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.136 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.136 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.136 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.136 20:55:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.136 20:55:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:27.136 20:55:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.136 20:55:51 -- host/auth.sh@44 -- # digest=sha384 00:24:27.136 20:55:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.136 20:55:51 -- host/auth.sh@44 -- # keyid=3 00:24:27.136 20:55:51 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:27.136 20:55:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.136 20:55:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:27.136 20:55:51 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:27.136 20:55:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:24:27.136 20:55:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.136 20:55:51 -- host/auth.sh@68 -- # digest=sha384 00:24:27.136 20:55:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:27.136 20:55:51 -- host/auth.sh@68 -- # keyid=3 00:24:27.136 20:55:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:27.136 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.136 20:55:51 -- common/autotest_common.sh@10 -- # set +x 
00:24:27.136 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.136 20:55:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.136 20:55:51 -- nvmf/common.sh@717 -- # local ip 00:24:27.136 20:55:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.136 20:55:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.136 20:55:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.136 20:55:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.136 20:55:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.136 20:55:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.136 20:55:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.136 20:55:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.136 20:55:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.136 20:55:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:27.136 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.136 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.399 nvme0n1 00:24:27.399 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.399 20:55:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.399 20:55:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.399 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.399 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.399 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.399 20:55:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.399 20:55:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.399 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.399 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.399 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.399 20:55:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.399 20:55:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:27.399 20:55:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.399 20:55:51 -- host/auth.sh@44 -- # digest=sha384 00:24:27.399 20:55:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:27.399 20:55:51 -- host/auth.sh@44 -- # keyid=4 00:24:27.399 20:55:51 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:27.399 20:55:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.399 20:55:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:27.399 20:55:51 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:27.399 20:55:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:24:27.399 20:55:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.399 20:55:51 -- host/auth.sh@68 -- # digest=sha384 00:24:27.399 20:55:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:27.399 20:55:51 -- host/auth.sh@68 -- # keyid=4 00:24:27.399 20:55:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:27.399 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.399 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.399 20:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
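On the target side, each nvmet_auth_set_key <digest> <dhgroup> <keyid> call (host/auth.sh@42-49 in the trace) echoes the digest, the FFDHE group and the DHHC-1 secret for the test host; the redirection targets are not visible in the xtrace output. A plausible reconstruction of the iteration traced here (sha384 / ffdhe3072 / key 3), assuming the Linux kernel nvmet target and its configfs host attributes -- the path and attribute names below are an assumption, not something the log shows:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs location
echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest for the DH-HMAC-CHAP transaction
echo 'ffdhe3072'    > "$host/dhchap_dhgroup"   # FFDHE group used for the key exchange
# DHHC-1:<nn>:<base64 secret + CRC>:  -- nn=00 means the secret is used as-is,
# 01/02/03 mean it is transformed with SHA-256/384/512 first
echo 'DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==:' > "$host/dhchap_key"

The host-side connect_authenticate call that follows each of these writes is the RPC sequence sketched earlier.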
00:24:27.399 20:55:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.399 20:55:51 -- nvmf/common.sh@717 -- # local ip 00:24:27.399 20:55:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.399 20:55:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.399 20:55:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.399 20:55:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.399 20:55:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.399 20:55:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.399 20:55:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.399 20:55:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.399 20:55:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.399 20:55:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:27.399 20:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.399 20:55:51 -- common/autotest_common.sh@10 -- # set +x 00:24:27.660 nvme0n1 00:24:27.660 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.660 20:55:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.660 20:55:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.660 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.660 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:27.660 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.660 20:55:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.660 20:55:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.660 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.660 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:27.660 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.660 20:55:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:27.660 20:55:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.660 20:55:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:27.660 20:55:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.660 20:55:52 -- host/auth.sh@44 -- # digest=sha384 00:24:27.660 20:55:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.660 20:55:52 -- host/auth.sh@44 -- # keyid=0 00:24:27.660 20:55:52 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:27.660 20:55:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.660 20:55:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.660 20:55:52 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:27.660 20:55:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:24:27.660 20:55:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.660 20:55:52 -- host/auth.sh@68 -- # digest=sha384 00:24:27.660 20:55:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.660 20:55:52 -- host/auth.sh@68 -- # keyid=0 00:24:27.660 20:55:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:27.660 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.660 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:27.660 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.660 20:55:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.660 20:55:52 -- 
nvmf/common.sh@717 -- # local ip 00:24:27.660 20:55:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.661 20:55:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.661 20:55:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.661 20:55:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.661 20:55:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.661 20:55:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.661 20:55:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.661 20:55:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.661 20:55:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.661 20:55:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:27.661 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.661 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:27.921 nvme0n1 00:24:27.921 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.921 20:55:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.921 20:55:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:27.921 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.921 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:27.921 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.921 20:55:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.921 20:55:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.921 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.921 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:27.921 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.921 20:55:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:27.921 20:55:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:27.921 20:55:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:27.921 20:55:52 -- host/auth.sh@44 -- # digest=sha384 00:24:27.921 20:55:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:27.921 20:55:52 -- host/auth.sh@44 -- # keyid=1 00:24:27.921 20:55:52 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:27.921 20:55:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:27.921 20:55:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:27.921 20:55:52 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:27.921 20:55:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:27.921 20:55:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:27.921 20:55:52 -- host/auth.sh@68 -- # digest=sha384 00:24:27.921 20:55:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:27.921 20:55:52 -- host/auth.sh@68 -- # keyid=1 00:24:27.921 20:55:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:27.921 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.921 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:27.921 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:27.921 20:55:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:27.921 20:55:52 -- nvmf/common.sh@717 -- # local ip 00:24:27.921 20:55:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:27.921 20:55:52 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:27.921 20:55:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.921 20:55:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.921 20:55:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:27.921 20:55:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.921 20:55:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:27.921 20:55:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:27.921 20:55:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:27.921 20:55:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:27.921 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:27.921 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:28.490 nvme0n1 00:24:28.490 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.490 20:55:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.490 20:55:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.490 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.490 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:28.490 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.490 20:55:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.490 20:55:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.490 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.490 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:28.490 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.490 20:55:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.490 20:55:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:28.490 20:55:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.490 20:55:52 -- host/auth.sh@44 -- # digest=sha384 00:24:28.490 20:55:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.490 20:55:52 -- host/auth.sh@44 -- # keyid=2 00:24:28.490 20:55:52 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:28.490 20:55:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.490 20:55:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:28.490 20:55:52 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:28.490 20:55:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:28.490 20:55:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.490 20:55:52 -- host/auth.sh@68 -- # digest=sha384 00:24:28.490 20:55:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:28.490 20:55:52 -- host/auth.sh@68 -- # keyid=2 00:24:28.490 20:55:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:28.490 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.490 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:28.490 20:55:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.490 20:55:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.490 20:55:52 -- nvmf/common.sh@717 -- # local ip 00:24:28.490 20:55:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.490 20:55:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.490 20:55:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.490 20:55:52 -- 
nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.490 20:55:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:28.490 20:55:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.490 20:55:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.490 20:55:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.490 20:55:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.490 20:55:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:28.490 20:55:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.490 20:55:52 -- common/autotest_common.sh@10 -- # set +x 00:24:28.750 nvme0n1 00:24:28.750 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.750 20:55:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.750 20:55:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:28.750 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.750 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:28.750 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.750 20:55:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.750 20:55:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.750 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.750 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:28.750 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.750 20:55:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:28.750 20:55:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:28.750 20:55:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:28.750 20:55:53 -- host/auth.sh@44 -- # digest=sha384 00:24:28.750 20:55:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:28.750 20:55:53 -- host/auth.sh@44 -- # keyid=3 00:24:28.750 20:55:53 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:28.750 20:55:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:28.750 20:55:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:28.750 20:55:53 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:28.750 20:55:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:28.750 20:55:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:28.750 20:55:53 -- host/auth.sh@68 -- # digest=sha384 00:24:28.750 20:55:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:28.750 20:55:53 -- host/auth.sh@68 -- # keyid=3 00:24:28.750 20:55:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:28.750 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.750 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:28.750 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.750 20:55:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:28.750 20:55:53 -- nvmf/common.sh@717 -- # local ip 00:24:28.750 20:55:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:28.750 20:55:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:28.750 20:55:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.750 20:55:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.750 20:55:53 -- nvmf/common.sh@723 -- # [[ -z 
tcp ]] 00:24:28.750 20:55:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.750 20:55:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:28.750 20:55:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:28.750 20:55:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:28.750 20:55:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:28.750 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.750 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.011 nvme0n1 00:24:29.011 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.011 20:55:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.011 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.011 20:55:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.011 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.011 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.011 20:55:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.011 20:55:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.011 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.011 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.011 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.011 20:55:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.011 20:55:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:29.011 20:55:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.011 20:55:53 -- host/auth.sh@44 -- # digest=sha384 00:24:29.011 20:55:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:29.011 20:55:53 -- host/auth.sh@44 -- # keyid=4 00:24:29.011 20:55:53 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:29.011 20:55:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:29.011 20:55:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:29.011 20:55:53 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:29.011 20:55:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:29.011 20:55:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.011 20:55:53 -- host/auth.sh@68 -- # digest=sha384 00:24:29.011 20:55:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:29.011 20:55:53 -- host/auth.sh@68 -- # keyid=4 00:24:29.011 20:55:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:29.011 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.011 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.011 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.011 20:55:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.011 20:55:53 -- nvmf/common.sh@717 -- # local ip 00:24:29.011 20:55:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.011 20:55:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.011 20:55:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.011 20:55:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.011 20:55:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.011 20:55:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP 
]] 00:24:29.011 20:55:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.011 20:55:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.011 20:55:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.011 20:55:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.011 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.011 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.271 nvme0n1 00:24:29.271 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.271 20:55:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.271 20:55:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:29.271 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.271 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.530 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.530 20:55:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.530 20:55:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.530 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.530 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.530 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.530 20:55:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:29.530 20:55:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:29.530 20:55:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:29.530 20:55:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:29.530 20:55:53 -- host/auth.sh@44 -- # digest=sha384 00:24:29.530 20:55:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:29.530 20:55:53 -- host/auth.sh@44 -- # keyid=0 00:24:29.530 20:55:53 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:29.530 20:55:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:29.530 20:55:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:29.530 20:55:53 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:29.530 20:55:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:29.530 20:55:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:29.530 20:55:53 -- host/auth.sh@68 -- # digest=sha384 00:24:29.530 20:55:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:29.530 20:55:53 -- host/auth.sh@68 -- # keyid=0 00:24:29.530 20:55:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:29.530 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.530 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:29.530 20:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:29.530 20:55:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:29.530 20:55:53 -- nvmf/common.sh@717 -- # local ip 00:24:29.530 20:55:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:29.530 20:55:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:29.530 20:55:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.530 20:55:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.530 20:55:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:29.530 20:55:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.530 20:55:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:29.530 
20:55:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:29.530 20:55:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:29.530 20:55:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:29.530 20:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:29.530 20:55:53 -- common/autotest_common.sh@10 -- # set +x 00:24:30.099 nvme0n1 00:24:30.099 20:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.099 20:55:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.099 20:55:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.099 20:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.099 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:24:30.099 20:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.099 20:55:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.099 20:55:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.099 20:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.099 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:24:30.099 20:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.099 20:55:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.099 20:55:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:30.099 20:55:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.099 20:55:54 -- host/auth.sh@44 -- # digest=sha384 00:24:30.099 20:55:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:30.099 20:55:54 -- host/auth.sh@44 -- # keyid=1 00:24:30.099 20:55:54 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:30.099 20:55:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:30.099 20:55:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:30.100 20:55:54 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:30.100 20:55:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:30.100 20:55:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.100 20:55:54 -- host/auth.sh@68 -- # digest=sha384 00:24:30.100 20:55:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:30.100 20:55:54 -- host/auth.sh@68 -- # keyid=1 00:24:30.100 20:55:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:30.100 20:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.100 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:24:30.100 20:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.100 20:55:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.100 20:55:54 -- nvmf/common.sh@717 -- # local ip 00:24:30.100 20:55:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.100 20:55:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.100 20:55:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.100 20:55:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.100 20:55:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.100 20:55:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.100 20:55:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.100 20:55:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.100 20:55:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
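The entries above resolve the initiator address (10.0.0.1 for the tcp transport) just before the controller is attached. Condensed, one pass of connect_authenticate for the sha384/ffdhe6144/key1 case that follows amounts to the sequence below. This is a sketch reconstructed from the trace: rpc_cmd is the test framework's wrapper around scripts/rpc.py, and nvmet_auth_set_key is the host/auth.sh helper that installs the target-side secret for the given digest/DH group/key index.

    # install the target-side DH-HMAC-CHAP secret for key index 1 (test helper from host/auth.sh)
    nvmet_auth_set_key sha384 ffdhe6144 1
    # restrict the initiator to the digest/DH-group combination under test
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # attach to the target at 10.0.0.1:4420, authenticating with the matching key slot
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1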
00:24:30.100 20:55:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:30.100 20:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.100 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:24:30.363 nvme0n1 00:24:30.363 20:55:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.363 20:55:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.363 20:55:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:30.363 20:55:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.363 20:55:54 -- common/autotest_common.sh@10 -- # set +x 00:24:30.634 20:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.634 20:55:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.634 20:55:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.634 20:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.634 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:24:30.634 20:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.634 20:55:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:30.634 20:55:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:30.634 20:55:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:30.634 20:55:55 -- host/auth.sh@44 -- # digest=sha384 00:24:30.634 20:55:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:30.634 20:55:55 -- host/auth.sh@44 -- # keyid=2 00:24:30.634 20:55:55 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:30.634 20:55:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:30.634 20:55:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:30.634 20:55:55 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:30.634 20:55:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:30.634 20:55:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:30.634 20:55:55 -- host/auth.sh@68 -- # digest=sha384 00:24:30.634 20:55:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:30.634 20:55:55 -- host/auth.sh@68 -- # keyid=2 00:24:30.634 20:55:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:30.634 20:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.634 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:24:30.634 20:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.634 20:55:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:30.634 20:55:55 -- nvmf/common.sh@717 -- # local ip 00:24:30.634 20:55:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:30.634 20:55:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:30.634 20:55:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.634 20:55:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.634 20:55:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:30.634 20:55:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.634 20:55:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:30.634 20:55:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:30.634 20:55:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:30.634 20:55:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:30.634 20:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.634 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:24:30.893 nvme0n1 00:24:31.154 20:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.154 20:55:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.154 20:55:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.154 20:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.154 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:24:31.154 20:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.154 20:55:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.154 20:55:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.154 20:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.154 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:24:31.154 20:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.154 20:55:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.154 20:55:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:31.154 20:55:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.154 20:55:55 -- host/auth.sh@44 -- # digest=sha384 00:24:31.154 20:55:55 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:31.154 20:55:55 -- host/auth.sh@44 -- # keyid=3 00:24:31.154 20:55:55 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:31.154 20:55:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:31.154 20:55:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:31.154 20:55:55 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:31.154 20:55:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:31.154 20:55:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.154 20:55:55 -- host/auth.sh@68 -- # digest=sha384 00:24:31.154 20:55:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:31.154 20:55:55 -- host/auth.sh@68 -- # keyid=3 00:24:31.154 20:55:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:31.154 20:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.154 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:24:31.154 20:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.154 20:55:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.154 20:55:55 -- nvmf/common.sh@717 -- # local ip 00:24:31.154 20:55:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.154 20:55:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.154 20:55:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.154 20:55:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.154 20:55:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.154 20:55:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.154 20:55:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.154 20:55:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.154 20:55:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.154 20:55:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:31.154 20:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 
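The same attach/verify/detach cycle repeats for every digest, DH group and key index; the loop structure is visible in the host/auth.sh@107-110 markers throughout the trace. A sketch of that outer loop (the array contents beyond the values seen in this run, sha384/sha512 digests, ffdhe2048 through ffdhe8192 groups and key indexes 0-4, are assumptions):

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side secret for this combination
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator-side attach, verify, detach
            done
        done
    done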
00:24:31.154 20:55:55 -- common/autotest_common.sh@10 -- # set +x 00:24:31.414 nvme0n1 00:24:31.414 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.414 20:55:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.414 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.414 20:55:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:31.414 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:31.414 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.675 20:55:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.675 20:55:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.675 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.675 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:31.675 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.675 20:55:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:31.675 20:55:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:31.675 20:55:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:31.675 20:55:56 -- host/auth.sh@44 -- # digest=sha384 00:24:31.675 20:55:56 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:31.675 20:55:56 -- host/auth.sh@44 -- # keyid=4 00:24:31.675 20:55:56 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:31.675 20:55:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:31.675 20:55:56 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:31.675 20:55:56 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:31.675 20:55:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:31.675 20:55:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:31.675 20:55:56 -- host/auth.sh@68 -- # digest=sha384 00:24:31.675 20:55:56 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:31.675 20:55:56 -- host/auth.sh@68 -- # keyid=4 00:24:31.675 20:55:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:31.675 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.675 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:31.675 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.675 20:55:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:31.675 20:55:56 -- nvmf/common.sh@717 -- # local ip 00:24:31.675 20:55:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:31.675 20:55:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:31.675 20:55:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.675 20:55:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.675 20:55:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:31.675 20:55:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.675 20:55:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:31.675 20:55:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:31.675 20:55:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:31.675 20:55:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.675 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.675 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:32.246 
nvme0n1 00:24:32.246 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.246 20:55:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.246 20:55:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.246 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.246 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:32.246 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.246 20:55:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.246 20:55:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.246 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.246 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:32.246 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.246 20:55:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.246 20:55:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:32.246 20:55:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:32.246 20:55:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:32.246 20:55:56 -- host/auth.sh@44 -- # digest=sha384 00:24:32.246 20:55:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:32.246 20:55:56 -- host/auth.sh@44 -- # keyid=0 00:24:32.246 20:55:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:32.246 20:55:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:32.246 20:55:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:32.246 20:55:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:32.246 20:55:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:32.246 20:55:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:32.246 20:55:56 -- host/auth.sh@68 -- # digest=sha384 00:24:32.246 20:55:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:32.246 20:55:56 -- host/auth.sh@68 -- # keyid=0 00:24:32.246 20:55:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:32.246 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.246 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:32.246 20:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:32.246 20:55:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:32.246 20:55:56 -- nvmf/common.sh@717 -- # local ip 00:24:32.246 20:55:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:32.246 20:55:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:32.246 20:55:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.246 20:55:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.246 20:55:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:32.246 20:55:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.246 20:55:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:32.246 20:55:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:32.246 20:55:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:32.246 20:55:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:32.247 20:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.247 20:55:56 -- common/autotest_common.sh@10 -- # set +x 00:24:32.817 nvme0n1 00:24:32.817 20:55:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
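After each successful attach, the test checks that the authenticated controller actually came up and then detaches it before trying the next key. Condensed from the host/auth.sh@73-74 entries above (a sketch; ctrl_name is a local variable introduced here for readability):

    # the attached controller must be reported back as nvme0
    ctrl_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$ctrl_name" == "nvme0" ]]
    # tear down before the next digest/DH-group/key combination is exercised
    rpc_cmd bdev_nvme_detach_controller nvme0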
00:24:32.817 20:55:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.817 20:55:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:32.817 20:55:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:32.817 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:24:32.817 20:55:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.078 20:55:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.078 20:55:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.078 20:55:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.078 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:24:33.078 20:55:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.078 20:55:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.078 20:55:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:33.078 20:55:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.078 20:55:57 -- host/auth.sh@44 -- # digest=sha384 00:24:33.078 20:55:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.078 20:55:57 -- host/auth.sh@44 -- # keyid=1 00:24:33.078 20:55:57 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:33.078 20:55:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:33.078 20:55:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.078 20:55:57 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:33.078 20:55:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:33.078 20:55:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.078 20:55:57 -- host/auth.sh@68 -- # digest=sha384 00:24:33.078 20:55:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.078 20:55:57 -- host/auth.sh@68 -- # keyid=1 00:24:33.078 20:55:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.078 20:55:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.078 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:24:33.078 20:55:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.078 20:55:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.078 20:55:57 -- nvmf/common.sh@717 -- # local ip 00:24:33.078 20:55:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.078 20:55:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.078 20:55:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.078 20:55:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.078 20:55:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.078 20:55:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.078 20:55:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.078 20:55:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.078 20:55:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.078 20:55:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:33.078 20:55:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.078 20:55:57 -- common/autotest_common.sh@10 -- # set +x 00:24:33.648 nvme0n1 00:24:33.648 20:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.648 20:55:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.648 20:55:58 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.648 20:55:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:33.648 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:24:33.648 20:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.648 20:55:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.648 20:55:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.648 20:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.648 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:24:33.909 20:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.909 20:55:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:33.909 20:55:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:33.909 20:55:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:33.909 20:55:58 -- host/auth.sh@44 -- # digest=sha384 00:24:33.909 20:55:58 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:33.909 20:55:58 -- host/auth.sh@44 -- # keyid=2 00:24:33.909 20:55:58 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:33.909 20:55:58 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:33.909 20:55:58 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:33.909 20:55:58 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:33.910 20:55:58 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:33.910 20:55:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:33.910 20:55:58 -- host/auth.sh@68 -- # digest=sha384 00:24:33.910 20:55:58 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:33.910 20:55:58 -- host/auth.sh@68 -- # keyid=2 00:24:33.910 20:55:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:33.910 20:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.910 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:24:33.910 20:55:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:33.910 20:55:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:33.910 20:55:58 -- nvmf/common.sh@717 -- # local ip 00:24:33.910 20:55:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:33.910 20:55:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:33.910 20:55:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.910 20:55:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.910 20:55:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:33.910 20:55:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.910 20:55:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:33.910 20:55:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:33.910 20:55:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:33.910 20:55:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:33.910 20:55:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:33.910 20:55:58 -- common/autotest_common.sh@10 -- # set +x 00:24:34.483 nvme0n1 00:24:34.483 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.483 20:55:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.483 20:55:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:34.483 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.483 20:55:59 -- common/autotest_common.sh@10 
-- # set +x 00:24:34.483 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.483 20:55:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.483 20:55:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.483 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.483 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:24:34.483 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.483 20:55:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:34.483 20:55:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:34.483 20:55:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:34.483 20:55:59 -- host/auth.sh@44 -- # digest=sha384 00:24:34.483 20:55:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:34.483 20:55:59 -- host/auth.sh@44 -- # keyid=3 00:24:34.483 20:55:59 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:34.483 20:55:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:34.483 20:55:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:34.483 20:55:59 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:34.483 20:55:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:34.483 20:55:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:34.483 20:55:59 -- host/auth.sh@68 -- # digest=sha384 00:24:34.484 20:55:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:34.484 20:55:59 -- host/auth.sh@68 -- # keyid=3 00:24:34.484 20:55:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:34.484 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.484 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:24:34.484 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:34.484 20:55:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:34.484 20:55:59 -- nvmf/common.sh@717 -- # local ip 00:24:34.484 20:55:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:34.484 20:55:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:34.484 20:55:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.484 20:55:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.484 20:55:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:34.484 20:55:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.484 20:55:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:34.484 20:55:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:34.484 20:55:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:34.484 20:55:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:34.484 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:34.484 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:24:35.426 nvme0n1 00:24:35.426 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.426 20:55:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.426 20:55:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:35.426 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.426 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:24:35.426 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.426 20:55:59 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.426 20:55:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.426 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.426 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:24:35.426 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.426 20:55:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:35.426 20:55:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:35.426 20:55:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:35.426 20:55:59 -- host/auth.sh@44 -- # digest=sha384 00:24:35.426 20:55:59 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:35.426 20:55:59 -- host/auth.sh@44 -- # keyid=4 00:24:35.426 20:55:59 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:35.426 20:55:59 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:35.426 20:55:59 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:35.426 20:55:59 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:35.426 20:55:59 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:35.426 20:55:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:35.426 20:55:59 -- host/auth.sh@68 -- # digest=sha384 00:24:35.426 20:55:59 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:35.426 20:55:59 -- host/auth.sh@68 -- # keyid=4 00:24:35.426 20:55:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:35.426 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.426 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:24:35.426 20:55:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:35.426 20:55:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:35.426 20:55:59 -- nvmf/common.sh@717 -- # local ip 00:24:35.426 20:55:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:35.426 20:55:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:35.426 20:55:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.426 20:55:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.426 20:55:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:35.426 20:55:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.426 20:55:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:35.426 20:55:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:35.426 20:55:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:35.426 20:55:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:35.426 20:55:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:35.426 20:55:59 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 nvme0n1 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.370 20:56:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.370 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.370 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.370 20:56:00 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.370 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.370 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:36.370 20:56:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:36.370 20:56:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.370 20:56:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:36.370 20:56:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.370 20:56:00 -- host/auth.sh@44 -- # digest=sha512 00:24:36.370 20:56:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.370 20:56:00 -- host/auth.sh@44 -- # keyid=0 00:24:36.370 20:56:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:36.370 20:56:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.370 20:56:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:36.370 20:56:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:36.370 20:56:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:36.370 20:56:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.370 20:56:00 -- host/auth.sh@68 -- # digest=sha512 00:24:36.370 20:56:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:36.370 20:56:00 -- host/auth.sh@68 -- # keyid=0 00:24:36.370 20:56:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:36.370 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.370 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.370 20:56:00 -- nvmf/common.sh@717 -- # local ip 00:24:36.370 20:56:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.370 20:56:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.370 20:56:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.370 20:56:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.370 20:56:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.370 20:56:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.370 20:56:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.370 20:56:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.370 20:56:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.370 20:56:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:36.370 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.370 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 nvme0n1 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.370 20:56:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.370 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.370 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.370 20:56:00 -- 
host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.370 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.370 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.370 20:56:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:36.370 20:56:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.370 20:56:00 -- host/auth.sh@44 -- # digest=sha512 00:24:36.370 20:56:00 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.370 20:56:00 -- host/auth.sh@44 -- # keyid=1 00:24:36.370 20:56:00 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:36.370 20:56:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.370 20:56:00 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:36.370 20:56:00 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:36.370 20:56:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:24:36.370 20:56:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.370 20:56:00 -- host/auth.sh@68 -- # digest=sha512 00:24:36.370 20:56:00 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:36.370 20:56:00 -- host/auth.sh@68 -- # keyid=1 00:24:36.370 20:56:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:36.370 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.370 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.370 20:56:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.370 20:56:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.370 20:56:00 -- nvmf/common.sh@717 -- # local ip 00:24:36.370 20:56:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.370 20:56:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.370 20:56:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.370 20:56:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.370 20:56:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.370 20:56:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.370 20:56:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.370 20:56:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.371 20:56:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.371 20:56:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:36.371 20:56:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.371 20:56:00 -- common/autotest_common.sh@10 -- # set +x 00:24:36.632 nvme0n1 00:24:36.632 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.632 20:56:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.632 20:56:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.632 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.632 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:36.632 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.632 20:56:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.632 20:56:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.632 20:56:01 -- common/autotest_common.sh@549 -- 
# xtrace_disable 00:24:36.632 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:36.632 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.632 20:56:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.632 20:56:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:36.632 20:56:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.632 20:56:01 -- host/auth.sh@44 -- # digest=sha512 00:24:36.632 20:56:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.632 20:56:01 -- host/auth.sh@44 -- # keyid=2 00:24:36.632 20:56:01 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:36.632 20:56:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.632 20:56:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:36.632 20:56:01 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:36.632 20:56:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:36.632 20:56:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.632 20:56:01 -- host/auth.sh@68 -- # digest=sha512 00:24:36.632 20:56:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:36.632 20:56:01 -- host/auth.sh@68 -- # keyid=2 00:24:36.632 20:56:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:36.632 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.632 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:36.632 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.632 20:56:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.632 20:56:01 -- nvmf/common.sh@717 -- # local ip 00:24:36.632 20:56:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.632 20:56:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.632 20:56:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.632 20:56:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.632 20:56:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.632 20:56:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.632 20:56:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.632 20:56:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.632 20:56:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.632 20:56:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:36.632 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.632 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:36.894 nvme0n1 00:24:36.894 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.894 20:56:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.894 20:56:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:36.894 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.894 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:36.894 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.894 20:56:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.894 20:56:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.894 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.894 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:36.894 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.894 
20:56:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:36.894 20:56:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:36.894 20:56:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:36.894 20:56:01 -- host/auth.sh@44 -- # digest=sha512 00:24:36.894 20:56:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:36.894 20:56:01 -- host/auth.sh@44 -- # keyid=3 00:24:36.894 20:56:01 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:36.894 20:56:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:36.894 20:56:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:36.894 20:56:01 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:36.894 20:56:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:24:36.894 20:56:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:36.894 20:56:01 -- host/auth.sh@68 -- # digest=sha512 00:24:36.894 20:56:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:36.894 20:56:01 -- host/auth.sh@68 -- # keyid=3 00:24:36.894 20:56:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:36.894 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.894 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:36.894 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:36.894 20:56:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:36.894 20:56:01 -- nvmf/common.sh@717 -- # local ip 00:24:36.894 20:56:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:36.894 20:56:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:36.894 20:56:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.894 20:56:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.894 20:56:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:36.894 20:56:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.894 20:56:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:36.894 20:56:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:36.894 20:56:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:36.894 20:56:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:36.894 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:36.894 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.155 nvme0n1 00:24:37.155 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.155 20:56:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.155 20:56:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.155 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.155 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.155 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.155 20:56:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.155 20:56:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.155 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.155 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.155 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.155 20:56:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.155 20:56:01 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:24:37.155 20:56:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.155 20:56:01 -- host/auth.sh@44 -- # digest=sha512 00:24:37.155 20:56:01 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:37.155 20:56:01 -- host/auth.sh@44 -- # keyid=4 00:24:37.155 20:56:01 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:37.155 20:56:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.155 20:56:01 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:37.155 20:56:01 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:37.155 20:56:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:24:37.155 20:56:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.155 20:56:01 -- host/auth.sh@68 -- # digest=sha512 00:24:37.155 20:56:01 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:37.155 20:56:01 -- host/auth.sh@68 -- # keyid=4 00:24:37.155 20:56:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:37.155 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.155 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.155 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.155 20:56:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.155 20:56:01 -- nvmf/common.sh@717 -- # local ip 00:24:37.155 20:56:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.155 20:56:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.155 20:56:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.155 20:56:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.155 20:56:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.155 20:56:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.155 20:56:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.155 20:56:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.155 20:56:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.155 20:56:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.155 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.155 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.416 nvme0n1 00:24:37.416 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.416 20:56:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.416 20:56:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.416 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.416 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.416 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.416 20:56:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.416 20:56:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.416 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.416 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.416 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.416 20:56:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:37.416 20:56:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.416 20:56:01 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha512 ffdhe3072 0 00:24:37.416 20:56:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.416 20:56:01 -- host/auth.sh@44 -- # digest=sha512 00:24:37.416 20:56:01 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:37.416 20:56:01 -- host/auth.sh@44 -- # keyid=0 00:24:37.416 20:56:01 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:37.416 20:56:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.416 20:56:01 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:37.416 20:56:01 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:37.416 20:56:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:24:37.416 20:56:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.416 20:56:01 -- host/auth.sh@68 -- # digest=sha512 00:24:37.416 20:56:01 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:37.416 20:56:01 -- host/auth.sh@68 -- # keyid=0 00:24:37.416 20:56:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:37.416 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.416 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.416 20:56:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.416 20:56:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.416 20:56:01 -- nvmf/common.sh@717 -- # local ip 00:24:37.416 20:56:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.416 20:56:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.416 20:56:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.417 20:56:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.417 20:56:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.417 20:56:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.417 20:56:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.417 20:56:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.417 20:56:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.417 20:56:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:37.417 20:56:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.417 20:56:01 -- common/autotest_common.sh@10 -- # set +x 00:24:37.678 nvme0n1 00:24:37.678 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.678 20:56:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.679 20:56:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.679 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.679 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:37.679 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.679 20:56:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.679 20:56:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.679 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.679 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:37.679 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.679 20:56:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.679 20:56:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:37.679 20:56:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.679 20:56:02 -- host/auth.sh@44 -- # 
digest=sha512 00:24:37.679 20:56:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:37.679 20:56:02 -- host/auth.sh@44 -- # keyid=1 00:24:37.679 20:56:02 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:37.679 20:56:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.679 20:56:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:37.679 20:56:02 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:37.679 20:56:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:24:37.679 20:56:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.679 20:56:02 -- host/auth.sh@68 -- # digest=sha512 00:24:37.679 20:56:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:37.679 20:56:02 -- host/auth.sh@68 -- # keyid=1 00:24:37.679 20:56:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:37.679 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.679 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:37.679 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.679 20:56:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.679 20:56:02 -- nvmf/common.sh@717 -- # local ip 00:24:37.679 20:56:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.679 20:56:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.679 20:56:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.679 20:56:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.679 20:56:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.679 20:56:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.679 20:56:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.679 20:56:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.679 20:56:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.679 20:56:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:37.679 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.679 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:37.939 nvme0n1 00:24:37.939 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.939 20:56:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.939 20:56:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:37.939 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.939 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:37.939 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.939 20:56:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.939 20:56:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.939 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.939 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:37.939 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.940 20:56:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:37.940 20:56:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:37.940 20:56:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:37.940 20:56:02 -- host/auth.sh@44 -- # digest=sha512 00:24:37.940 20:56:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:37.940 20:56:02 -- host/auth.sh@44 
-- # keyid=2 00:24:37.940 20:56:02 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:37.940 20:56:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:37.940 20:56:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:37.940 20:56:02 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:37.940 20:56:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:24:37.940 20:56:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:37.940 20:56:02 -- host/auth.sh@68 -- # digest=sha512 00:24:37.940 20:56:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:37.940 20:56:02 -- host/auth.sh@68 -- # keyid=2 00:24:37.940 20:56:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:37.940 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.940 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:37.940 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:37.940 20:56:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:37.940 20:56:02 -- nvmf/common.sh@717 -- # local ip 00:24:37.940 20:56:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:37.940 20:56:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:37.940 20:56:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.940 20:56:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.940 20:56:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:37.940 20:56:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.940 20:56:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:37.940 20:56:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:37.940 20:56:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:37.940 20:56:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:37.940 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:37.940 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.201 nvme0n1 00:24:38.201 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.201 20:56:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.201 20:56:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.201 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.201 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.201 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.201 20:56:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.201 20:56:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.201 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.201 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.201 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.201 20:56:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.201 20:56:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:38.201 20:56:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.201 20:56:02 -- host/auth.sh@44 -- # digest=sha512 00:24:38.201 20:56:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.201 20:56:02 -- host/auth.sh@44 -- # keyid=3 00:24:38.201 20:56:02 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:38.201 20:56:02 
-- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.201 20:56:02 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:38.201 20:56:02 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:38.201 20:56:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:24:38.201 20:56:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.201 20:56:02 -- host/auth.sh@68 -- # digest=sha512 00:24:38.201 20:56:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:38.201 20:56:02 -- host/auth.sh@68 -- # keyid=3 00:24:38.201 20:56:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:38.201 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.201 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.201 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.201 20:56:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.201 20:56:02 -- nvmf/common.sh@717 -- # local ip 00:24:38.201 20:56:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.201 20:56:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.201 20:56:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.201 20:56:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.201 20:56:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.201 20:56:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.201 20:56:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.201 20:56:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.201 20:56:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.201 20:56:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:38.201 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.201 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 nvme0n1 00:24:38.463 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.463 20:56:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.463 20:56:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.463 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.463 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.463 20:56:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.463 20:56:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.463 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.463 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.463 20:56:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.463 20:56:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:38.463 20:56:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.463 20:56:02 -- host/auth.sh@44 -- # digest=sha512 00:24:38.463 20:56:02 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:38.463 20:56:02 -- host/auth.sh@44 -- # keyid=4 00:24:38.463 20:56:02 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:38.463 20:56:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.463 20:56:02 -- host/auth.sh@48 -- # echo 
ffdhe3072 00:24:38.463 20:56:02 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:38.463 20:56:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:38.463 20:56:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.463 20:56:02 -- host/auth.sh@68 -- # digest=sha512 00:24:38.463 20:56:02 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:38.463 20:56:02 -- host/auth.sh@68 -- # keyid=4 00:24:38.463 20:56:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:38.463 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.463 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.463 20:56:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.463 20:56:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.463 20:56:02 -- nvmf/common.sh@717 -- # local ip 00:24:38.463 20:56:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.463 20:56:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.463 20:56:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.463 20:56:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.463 20:56:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.463 20:56:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.463 20:56:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.463 20:56:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.463 20:56:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.463 20:56:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:38.463 20:56:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.463 20:56:02 -- common/autotest_common.sh@10 -- # set +x 00:24:38.724 nvme0n1 00:24:38.724 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.724 20:56:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.724 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.724 20:56:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.724 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:38.724 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.724 20:56:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.724 20:56:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.724 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.724 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:38.724 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.724 20:56:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.724 20:56:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.724 20:56:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:38.724 20:56:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.724 20:56:03 -- host/auth.sh@44 -- # digest=sha512 00:24:38.724 20:56:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:38.724 20:56:03 -- host/auth.sh@44 -- # keyid=0 00:24:38.724 20:56:03 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:38.724 20:56:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.724 20:56:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:38.724 20:56:03 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:38.724 20:56:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:38.724 20:56:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.724 20:56:03 -- host/auth.sh@68 -- # digest=sha512 00:24:38.724 20:56:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:38.724 20:56:03 -- host/auth.sh@68 -- # keyid=0 00:24:38.724 20:56:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.724 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.724 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:38.724 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.724 20:56:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.724 20:56:03 -- nvmf/common.sh@717 -- # local ip 00:24:38.724 20:56:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.724 20:56:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.724 20:56:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.724 20:56:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.724 20:56:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.724 20:56:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.724 20:56:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.724 20:56:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.724 20:56:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.724 20:56:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:38.724 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.724 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:38.985 nvme0n1 00:24:38.985 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.985 20:56:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.985 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.985 20:56:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:38.985 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:38.985 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.985 20:56:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.985 20:56:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.985 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.985 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:38.985 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.985 20:56:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:38.985 20:56:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:38.985 20:56:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:38.985 20:56:03 -- host/auth.sh@44 -- # digest=sha512 00:24:38.985 20:56:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:38.985 20:56:03 -- host/auth.sh@44 -- # keyid=1 00:24:38.985 20:56:03 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:38.985 20:56:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:38.985 20:56:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:38.985 20:56:03 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:38.985 20:56:03 -- 
host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:24:38.985 20:56:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:38.985 20:56:03 -- host/auth.sh@68 -- # digest=sha512 00:24:38.985 20:56:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:38.985 20:56:03 -- host/auth.sh@68 -- # keyid=1 00:24:38.985 20:56:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:38.985 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.985 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:38.985 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:38.985 20:56:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:38.985 20:56:03 -- nvmf/common.sh@717 -- # local ip 00:24:38.985 20:56:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:38.985 20:56:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:38.985 20:56:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.985 20:56:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.985 20:56:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:38.985 20:56:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.985 20:56:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:38.985 20:56:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:38.985 20:56:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:38.985 20:56:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:38.985 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:38.985 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:39.247 nvme0n1 00:24:39.247 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.247 20:56:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.247 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.247 20:56:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.247 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:39.247 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.509 20:56:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.509 20:56:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.509 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.509 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:39.509 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.509 20:56:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.509 20:56:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:39.509 20:56:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.509 20:56:03 -- host/auth.sh@44 -- # digest=sha512 00:24:39.509 20:56:03 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.509 20:56:03 -- host/auth.sh@44 -- # keyid=2 00:24:39.509 20:56:03 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:39.509 20:56:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:39.509 20:56:03 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:39.509 20:56:03 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:39.509 20:56:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:39.509 20:56:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.509 20:56:03 -- 
host/auth.sh@68 -- # digest=sha512 00:24:39.509 20:56:03 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:39.509 20:56:03 -- host/auth.sh@68 -- # keyid=2 00:24:39.509 20:56:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:39.509 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.509 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:39.509 20:56:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.509 20:56:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.509 20:56:03 -- nvmf/common.sh@717 -- # local ip 00:24:39.509 20:56:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.509 20:56:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.509 20:56:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.509 20:56:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.509 20:56:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.509 20:56:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.509 20:56:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.509 20:56:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.509 20:56:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.509 20:56:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:39.509 20:56:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.509 20:56:03 -- common/autotest_common.sh@10 -- # set +x 00:24:39.772 nvme0n1 00:24:39.772 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.772 20:56:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.772 20:56:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:39.772 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.772 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:39.772 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.772 20:56:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.772 20:56:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.772 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.772 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:39.772 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.772 20:56:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:39.772 20:56:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:39.772 20:56:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:39.772 20:56:04 -- host/auth.sh@44 -- # digest=sha512 00:24:39.772 20:56:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:39.772 20:56:04 -- host/auth.sh@44 -- # keyid=3 00:24:39.772 20:56:04 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:39.772 20:56:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:39.772 20:56:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:39.772 20:56:04 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:39.772 20:56:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:39.772 20:56:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:39.772 20:56:04 -- host/auth.sh@68 -- # digest=sha512 00:24:39.772 20:56:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:39.772 20:56:04 
-- host/auth.sh@68 -- # keyid=3 00:24:39.772 20:56:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:39.772 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.772 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:39.772 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:39.772 20:56:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:39.772 20:56:04 -- nvmf/common.sh@717 -- # local ip 00:24:39.772 20:56:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:39.772 20:56:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:39.772 20:56:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.772 20:56:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.772 20:56:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:39.772 20:56:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.772 20:56:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:39.772 20:56:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:39.772 20:56:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:39.772 20:56:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:39.772 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:39.772 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.033 nvme0n1 00:24:40.034 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.034 20:56:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.034 20:56:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.034 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.034 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.034 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.034 20:56:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.034 20:56:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.034 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.034 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.034 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.034 20:56:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.034 20:56:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:40.034 20:56:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.034 20:56:04 -- host/auth.sh@44 -- # digest=sha512 00:24:40.034 20:56:04 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:40.034 20:56:04 -- host/auth.sh@44 -- # keyid=4 00:24:40.034 20:56:04 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:40.034 20:56:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.034 20:56:04 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:40.034 20:56:04 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:40.034 20:56:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:40.034 20:56:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.034 20:56:04 -- host/auth.sh@68 -- # digest=sha512 00:24:40.034 20:56:04 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:40.034 20:56:04 -- host/auth.sh@68 -- # keyid=4 00:24:40.034 20:56:04 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:40.034 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.034 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.034 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.034 20:56:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.034 20:56:04 -- nvmf/common.sh@717 -- # local ip 00:24:40.034 20:56:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.034 20:56:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.034 20:56:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.034 20:56:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.034 20:56:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.034 20:56:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.034 20:56:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.034 20:56:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.034 20:56:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.034 20:56:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.034 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.034 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.294 nvme0n1 00:24:40.294 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.294 20:56:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.294 20:56:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:40.294 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.294 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.555 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.555 20:56:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.555 20:56:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.555 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.555 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.555 20:56:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.555 20:56:04 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:40.555 20:56:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:40.555 20:56:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:40.555 20:56:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:40.555 20:56:04 -- host/auth.sh@44 -- # digest=sha512 00:24:40.555 20:56:04 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:40.555 20:56:04 -- host/auth.sh@44 -- # keyid=0 00:24:40.555 20:56:04 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:40.555 20:56:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:40.555 20:56:04 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:40.555 20:56:04 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:40.555 20:56:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:40.555 20:56:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:40.555 20:56:04 -- host/auth.sh@68 -- # digest=sha512 00:24:40.555 20:56:04 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:40.555 20:56:04 -- host/auth.sh@68 -- # keyid=0 00:24:40.555 20:56:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 
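The trace above repeats one pattern per digest / DH group / key index: nvmet_auth_set_key loads the DHHC-1 secret on the kernel nvmet target, bdev_nvme_set_options pins the initiator to the digest and FFDHE group under test, and bdev_nvme_attach_controller connects with the matching --dhchap-key slot before the controller is verified and detached. A minimal sketch of one iteration, reconstructed from the xtrace (rpc_cmd and nvmet_auth_set_key are the suite's own helpers visible in the trace; the target-side configfs writes inside nvmet_auth_set_key are not shown in this excerpt, so only its call signature is taken from the log):

# One connect_authenticate round as seen in the trace (sha512 / ffdhe6144 / key 0 here).
digest=sha512
dhgroup=ffdhe6144
keyid=0

# Target side: install the host's DH-HMAC-CHAP secret for this digest/dhgroup.
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

# Initiator side: allow only the digest and FFDHE group under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with the matching key slot; 10.0.0.1:4420 is NVMF_INITIATOR_IP in this run.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"

# Verify the authenticated controller came up, then detach before the next round.
[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0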
00:24:40.555 20:56:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.555 20:56:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.555 20:56:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:40.555 20:56:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:40.555 20:56:05 -- nvmf/common.sh@717 -- # local ip 00:24:40.555 20:56:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:40.555 20:56:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:40.555 20:56:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.555 20:56:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.555 20:56:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:40.555 20:56:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.555 20:56:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:40.555 20:56:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:40.555 20:56:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:40.556 20:56:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:40.556 20:56:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:40.556 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:24:41.127 nvme0n1 00:24:41.127 20:56:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.127 20:56:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.127 20:56:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.127 20:56:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.127 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:24:41.127 20:56:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.127 20:56:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.127 20:56:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.127 20:56:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.127 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:24:41.127 20:56:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.127 20:56:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.127 20:56:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:41.127 20:56:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.127 20:56:05 -- host/auth.sh@44 -- # digest=sha512 00:24:41.127 20:56:05 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.127 20:56:05 -- host/auth.sh@44 -- # keyid=1 00:24:41.127 20:56:05 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:41.127 20:56:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:41.127 20:56:05 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:41.127 20:56:05 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:41.127 20:56:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:41.127 20:56:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.127 20:56:05 -- host/auth.sh@68 -- # digest=sha512 00:24:41.127 20:56:05 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:41.127 20:56:05 -- host/auth.sh@68 -- # keyid=1 00:24:41.127 20:56:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:41.127 20:56:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.127 20:56:05 -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.127 20:56:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.127 20:56:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:41.127 20:56:05 -- nvmf/common.sh@717 -- # local ip 00:24:41.127 20:56:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.127 20:56:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.127 20:56:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.127 20:56:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.127 20:56:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.127 20:56:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.127 20:56:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.127 20:56:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.127 20:56:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.127 20:56:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:41.127 20:56:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.127 20:56:05 -- common/autotest_common.sh@10 -- # set +x 00:24:41.387 nvme0n1 00:24:41.387 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.387 20:56:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.387 20:56:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:41.387 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.387 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:41.647 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.647 20:56:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.647 20:56:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.647 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.647 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:41.647 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.647 20:56:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:41.647 20:56:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:41.647 20:56:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:41.647 20:56:06 -- host/auth.sh@44 -- # digest=sha512 00:24:41.647 20:56:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:41.647 20:56:06 -- host/auth.sh@44 -- # keyid=2 00:24:41.647 20:56:06 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:41.647 20:56:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:41.647 20:56:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:41.647 20:56:06 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:41.647 20:56:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:41.647 20:56:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:41.647 20:56:06 -- host/auth.sh@68 -- # digest=sha512 00:24:41.647 20:56:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:41.647 20:56:06 -- host/auth.sh@68 -- # keyid=2 00:24:41.647 20:56:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:41.647 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.647 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:41.647 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:41.647 20:56:06 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:24:41.647 20:56:06 -- nvmf/common.sh@717 -- # local ip 00:24:41.647 20:56:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:41.647 20:56:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:41.647 20:56:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.647 20:56:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.647 20:56:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:41.647 20:56:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.647 20:56:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:41.647 20:56:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:41.647 20:56:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:41.647 20:56:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:41.647 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:41.647 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:42.219 nvme0n1 00:24:42.219 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.219 20:56:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.219 20:56:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.219 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.219 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:42.219 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.219 20:56:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.219 20:56:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.219 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.219 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:42.219 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.219 20:56:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.219 20:56:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:42.219 20:56:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.219 20:56:06 -- host/auth.sh@44 -- # digest=sha512 00:24:42.219 20:56:06 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.219 20:56:06 -- host/auth.sh@44 -- # keyid=3 00:24:42.219 20:56:06 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:42.219 20:56:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:42.219 20:56:06 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:42.219 20:56:06 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:42.219 20:56:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:42.219 20:56:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.219 20:56:06 -- host/auth.sh@68 -- # digest=sha512 00:24:42.219 20:56:06 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:42.219 20:56:06 -- host/auth.sh@68 -- # keyid=3 00:24:42.219 20:56:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.219 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.219 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:42.219 20:56:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.219 20:56:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.219 20:56:06 -- nvmf/common.sh@717 -- # local ip 00:24:42.219 20:56:06 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:24:42.219 20:56:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:42.219 20:56:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.219 20:56:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.219 20:56:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.219 20:56:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.219 20:56:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.219 20:56:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.219 20:56:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.219 20:56:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:42.219 20:56:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.219 20:56:06 -- common/autotest_common.sh@10 -- # set +x 00:24:42.482 nvme0n1 00:24:42.482 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.482 20:56:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.482 20:56:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:42.482 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.482 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:42.482 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.742 20:56:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.742 20:56:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.742 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.742 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:42.742 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.742 20:56:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:42.742 20:56:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:42.742 20:56:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:42.742 20:56:07 -- host/auth.sh@44 -- # digest=sha512 00:24:42.742 20:56:07 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.742 20:56:07 -- host/auth.sh@44 -- # keyid=4 00:24:42.742 20:56:07 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:42.742 20:56:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:42.742 20:56:07 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:42.742 20:56:07 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:42.742 20:56:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:42.742 20:56:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:42.742 20:56:07 -- host/auth.sh@68 -- # digest=sha512 00:24:42.742 20:56:07 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:42.742 20:56:07 -- host/auth.sh@68 -- # keyid=4 00:24:42.742 20:56:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:42.742 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.742 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:42.742 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:42.742 20:56:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:42.742 20:56:07 -- nvmf/common.sh@717 -- # local ip 00:24:42.742 20:56:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:42.742 20:56:07 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:24:42.742 20:56:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.742 20:56:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.742 20:56:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:42.742 20:56:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.742 20:56:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:42.742 20:56:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:42.742 20:56:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:42.742 20:56:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.742 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:42.742 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:43.003 nvme0n1 00:24:43.003 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.264 20:56:07 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.264 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.264 20:56:07 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.264 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:43.264 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.264 20:56:07 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.264 20:56:07 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.264 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.264 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:43.264 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.264 20:56:07 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.264 20:56:07 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:43.264 20:56:07 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:43.264 20:56:07 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:43.264 20:56:07 -- host/auth.sh@44 -- # digest=sha512 00:24:43.264 20:56:07 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:43.264 20:56:07 -- host/auth.sh@44 -- # keyid=0 00:24:43.264 20:56:07 -- host/auth.sh@45 -- # key=DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:43.264 20:56:07 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:43.264 20:56:07 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:43.264 20:56:07 -- host/auth.sh@49 -- # echo DHHC-1:00:MjA1ZDY1MGVhNmI1ODMyMTBkNjgyNzIxYjczODY3YWLNzNge: 00:24:43.264 20:56:07 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:43.264 20:56:07 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:43.264 20:56:07 -- host/auth.sh@68 -- # digest=sha512 00:24:43.264 20:56:07 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:43.264 20:56:07 -- host/auth.sh@68 -- # keyid=0 00:24:43.264 20:56:07 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:43.264 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.264 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:43.264 20:56:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.264 20:56:07 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:43.264 20:56:07 -- nvmf/common.sh@717 -- # local ip 00:24:43.265 20:56:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:43.265 20:56:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:43.265 20:56:07 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.265 20:56:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.265 20:56:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:43.265 20:56:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.265 20:56:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:43.265 20:56:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:43.265 20:56:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:43.265 20:56:07 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:43.265 20:56:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.265 20:56:07 -- common/autotest_common.sh@10 -- # set +x 00:24:43.836 nvme0n1 00:24:43.836 20:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:43.836 20:56:08 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.836 20:56:08 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:43.836 20:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:43.836 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:24:44.097 20:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.097 20:56:08 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.097 20:56:08 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.097 20:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.097 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:24:44.097 20:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.097 20:56:08 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.097 20:56:08 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:44.097 20:56:08 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.097 20:56:08 -- host/auth.sh@44 -- # digest=sha512 00:24:44.097 20:56:08 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.097 20:56:08 -- host/auth.sh@44 -- # keyid=1 00:24:44.097 20:56:08 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:44.097 20:56:08 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.097 20:56:08 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:44.097 20:56:08 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:44.097 20:56:08 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:44.097 20:56:08 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.097 20:56:08 -- host/auth.sh@68 -- # digest=sha512 00:24:44.097 20:56:08 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:44.097 20:56:08 -- host/auth.sh@68 -- # keyid=1 00:24:44.097 20:56:08 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.097 20:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.097 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:24:44.097 20:56:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.097 20:56:08 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.097 20:56:08 -- nvmf/common.sh@717 -- # local ip 00:24:44.097 20:56:08 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.097 20:56:08 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.097 20:56:08 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.097 20:56:08 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.097 20:56:08 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.097 20:56:08 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.097 20:56:08 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.097 20:56:08 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.097 20:56:08 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.097 20:56:08 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:44.097 20:56:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.097 20:56:08 -- common/autotest_common.sh@10 -- # set +x 00:24:44.667 nvme0n1 00:24:44.667 20:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.667 20:56:09 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.667 20:56:09 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:44.667 20:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.667 20:56:09 -- common/autotest_common.sh@10 -- # set +x 00:24:44.667 20:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.667 20:56:09 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.667 20:56:09 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.667 20:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.667 20:56:09 -- common/autotest_common.sh@10 -- # set +x 00:24:44.667 20:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.667 20:56:09 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:44.667 20:56:09 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:44.667 20:56:09 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:44.667 20:56:09 -- host/auth.sh@44 -- # digest=sha512 00:24:44.667 20:56:09 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.667 20:56:09 -- host/auth.sh@44 -- # keyid=2 00:24:44.667 20:56:09 -- host/auth.sh@45 -- # key=DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:44.667 20:56:09 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:44.667 20:56:09 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:44.667 20:56:09 -- host/auth.sh@49 -- # echo DHHC-1:01:NDQxMGMyYTk3NzZjNjQ1NmI4NGQ5ZWVkZGM0YTgwNTMfWu9o: 00:24:44.667 20:56:09 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:44.667 20:56:09 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:44.667 20:56:09 -- host/auth.sh@68 -- # digest=sha512 00:24:44.667 20:56:09 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:44.667 20:56:09 -- host/auth.sh@68 -- # keyid=2 00:24:44.667 20:56:09 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:44.667 20:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.667 20:56:09 -- common/autotest_common.sh@10 -- # set +x 00:24:44.667 20:56:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:44.667 20:56:09 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:44.667 20:56:09 -- nvmf/common.sh@717 -- # local ip 00:24:44.667 20:56:09 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:44.667 20:56:09 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:44.667 20:56:09 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.667 20:56:09 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.667 20:56:09 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:44.667 20:56:09 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:44.667 20:56:09 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:44.667 20:56:09 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:44.667 20:56:09 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:44.668 20:56:09 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:44.668 20:56:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:44.668 20:56:09 -- common/autotest_common.sh@10 -- # set +x 00:24:45.609 nvme0n1 00:24:45.609 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.609 20:56:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.609 20:56:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:45.609 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.609 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:45.609 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.609 20:56:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.609 20:56:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.609 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.609 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:45.609 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.609 20:56:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:45.609 20:56:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:45.609 20:56:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:45.609 20:56:10 -- host/auth.sh@44 -- # digest=sha512 00:24:45.609 20:56:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.609 20:56:10 -- host/auth.sh@44 -- # keyid=3 00:24:45.609 20:56:10 -- host/auth.sh@45 -- # key=DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:45.609 20:56:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:45.609 20:56:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:45.609 20:56:10 -- host/auth.sh@49 -- # echo DHHC-1:02:NDEwMzRlYzkzYzgxOGQzZWI4NTg2YzVlNTExMWMyNzQ4NjVlMDM2NDhjZmZiOWQ0Zd7cqA==: 00:24:45.609 20:56:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:45.609 20:56:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:45.609 20:56:10 -- host/auth.sh@68 -- # digest=sha512 00:24:45.609 20:56:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:45.609 20:56:10 -- host/auth.sh@68 -- # keyid=3 00:24:45.609 20:56:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:45.609 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.609 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:45.609 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:45.609 20:56:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:45.609 20:56:10 -- nvmf/common.sh@717 -- # local ip 00:24:45.609 20:56:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:45.609 20:56:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:45.609 20:56:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.609 20:56:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.609 20:56:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:45.609 20:56:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.609 20:56:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:45.609 20:56:10 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:45.609 20:56:10 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:45.609 20:56:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:45.609 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:45.609 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 nvme0n1 00:24:46.548 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.548 20:56:10 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.548 20:56:10 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:46.548 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.548 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.548 20:56:10 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.548 20:56:10 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.548 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.548 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.548 20:56:10 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:46.548 20:56:10 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:46.548 20:56:10 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:46.548 20:56:10 -- host/auth.sh@44 -- # digest=sha512 00:24:46.548 20:56:10 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.548 20:56:10 -- host/auth.sh@44 -- # keyid=4 00:24:46.548 20:56:10 -- host/auth.sh@45 -- # key=DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:46.548 20:56:10 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:46.548 20:56:10 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:46.548 20:56:10 -- host/auth.sh@49 -- # echo DHHC-1:03:YzljMzc3NmE3MWM5MDE2ZmIyZmY3MTlhNzE4ZjIzZjNiMGViMWEyZTliYjQxMjUwZDYyMjFhYzhlZTRlMGVkYrENKPc=: 00:24:46.548 20:56:10 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:46.548 20:56:10 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:46.548 20:56:10 -- host/auth.sh@68 -- # digest=sha512 00:24:46.548 20:56:10 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:46.548 20:56:10 -- host/auth.sh@68 -- # keyid=4 00:24:46.548 20:56:10 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:46.548 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.548 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:46.548 20:56:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:46.548 20:56:10 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:46.548 20:56:10 -- nvmf/common.sh@717 -- # local ip 00:24:46.548 20:56:10 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:46.548 20:56:10 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:46.548 20:56:10 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.548 20:56:10 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.548 20:56:10 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:46.548 20:56:10 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.548 20:56:10 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:46.548 20:56:10 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:46.548 20:56:10 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:46.548 20:56:10 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.548 20:56:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:46.548 20:56:10 -- common/autotest_common.sh@10 -- # set +x 00:24:47.122 nvme0n1 00:24:47.122 20:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.122 20:56:11 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.122 20:56:11 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:47.122 20:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.122 20:56:11 -- common/autotest_common.sh@10 -- # set +x 00:24:47.122 20:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.122 20:56:11 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.122 20:56:11 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.123 20:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.123 20:56:11 -- common/autotest_common.sh@10 -- # set +x 00:24:47.123 20:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.123 20:56:11 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:47.123 20:56:11 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:47.123 20:56:11 -- host/auth.sh@44 -- # digest=sha256 00:24:47.123 20:56:11 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:47.123 20:56:11 -- host/auth.sh@44 -- # keyid=1 00:24:47.123 20:56:11 -- host/auth.sh@45 -- # key=DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:47.123 20:56:11 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:47.123 20:56:11 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:47.383 20:56:11 -- host/auth.sh@49 -- # echo DHHC-1:00:YTI1ZjBiNTAyNDFmZDIxMWM5YjE1MTUzZDU4OWE2NmNmMTg1M2IyZGM0ZWYzODgywUmgRA==: 00:24:47.383 20:56:11 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:47.383 20:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.383 20:56:11 -- common/autotest_common.sh@10 -- # set +x 00:24:47.383 20:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.383 20:56:11 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:47.383 20:56:11 -- nvmf/common.sh@717 -- # local ip 00:24:47.383 20:56:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.383 20:56:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.383 20:56:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.383 20:56:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.383 20:56:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.383 20:56:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.383 20:56:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.384 20:56:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.384 20:56:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.384 20:56:11 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:47.384 20:56:11 -- common/autotest_common.sh@638 -- # local es=0 00:24:47.384 20:56:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:47.384 
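[Editor's sketch, not part of the captured trace] The loop above exercises the positive DH-HMAC-CHAP path once per configured key: the target-side secret is installed with nvmet_auth_set_key, the host is constrained to the digest/dhgroup under test, and the controller is attached, verified, and detached. A condensed, illustrative version of that per-key RPC sequence follows; the NQNs, address, and RPC names are copied from the trace, and the rpc.py path is assumed to be the same scripts/rpc.py used later in this run.

    # Sketch of the per-key positive-path flow traced above (keyid=2 shown).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host side: only negotiate the digest/dhgroup being tested.
    $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Attach with the matching DH-CHAP key, confirm the controller appeared, detach.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
    $RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    $RPC bdev_nvme_detach_controller nvme0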
20:56:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:47.384 20:56:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:47.384 20:56:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:47.384 20:56:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:47.384 20:56:11 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:47.384 20:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.384 20:56:11 -- common/autotest_common.sh@10 -- # set +x 00:24:47.384 request: 00:24:47.384 { 00:24:47.384 "name": "nvme0", 00:24:47.384 "trtype": "tcp", 00:24:47.384 "traddr": "10.0.0.1", 00:24:47.384 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:47.384 "adrfam": "ipv4", 00:24:47.384 "trsvcid": "4420", 00:24:47.384 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:47.384 "method": "bdev_nvme_attach_controller", 00:24:47.384 "req_id": 1 00:24:47.384 } 00:24:47.384 Got JSON-RPC error response 00:24:47.384 response: 00:24:47.384 { 00:24:47.384 "code": -32602, 00:24:47.384 "message": "Invalid parameters" 00:24:47.384 } 00:24:47.384 20:56:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:47.384 20:56:11 -- common/autotest_common.sh@641 -- # es=1 00:24:47.384 20:56:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:47.384 20:56:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:47.384 20:56:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:47.384 20:56:11 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.384 20:56:11 -- host/auth.sh@121 -- # jq length 00:24:47.384 20:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.384 20:56:11 -- common/autotest_common.sh@10 -- # set +x 00:24:47.384 20:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.384 20:56:11 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:47.384 20:56:11 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:47.384 20:56:11 -- nvmf/common.sh@717 -- # local ip 00:24:47.384 20:56:11 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:47.384 20:56:11 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:47.384 20:56:11 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.384 20:56:11 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.384 20:56:11 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:47.384 20:56:11 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.384 20:56:11 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:47.384 20:56:11 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:47.384 20:56:11 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:47.384 20:56:11 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:47.384 20:56:11 -- common/autotest_common.sh@638 -- # local es=0 00:24:47.384 20:56:11 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:47.384 20:56:11 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:47.384 20:56:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:47.384 20:56:11 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:47.384 20:56:11 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:47.384 20:56:11 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:47.384 20:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.384 20:56:11 -- common/autotest_common.sh@10 -- # set +x 00:24:47.384 request: 00:24:47.384 { 00:24:47.384 "name": "nvme0", 00:24:47.384 "trtype": "tcp", 00:24:47.384 "traddr": "10.0.0.1", 00:24:47.384 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:47.384 "adrfam": "ipv4", 00:24:47.384 "trsvcid": "4420", 00:24:47.384 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:47.384 "dhchap_key": "key2", 00:24:47.384 "method": "bdev_nvme_attach_controller", 00:24:47.384 "req_id": 1 00:24:47.384 } 00:24:47.384 Got JSON-RPC error response 00:24:47.384 response: 00:24:47.384 { 00:24:47.384 "code": -32602, 00:24:47.384 "message": "Invalid parameters" 00:24:47.384 } 00:24:47.384 20:56:11 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:47.384 20:56:11 -- common/autotest_common.sh@641 -- # es=1 00:24:47.384 20:56:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:47.384 20:56:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:47.384 20:56:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:47.384 20:56:11 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.384 20:56:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:47.384 20:56:11 -- host/auth.sh@127 -- # jq length 00:24:47.384 20:56:11 -- common/autotest_common.sh@10 -- # set +x 00:24:47.384 20:56:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:47.384 20:56:12 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:47.384 20:56:12 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:47.384 20:56:12 -- host/auth.sh@130 -- # cleanup 00:24:47.384 20:56:12 -- host/auth.sh@24 -- # nvmftestfini 00:24:47.384 20:56:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:47.384 20:56:12 -- nvmf/common.sh@117 -- # sync 00:24:47.384 20:56:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.384 20:56:12 -- nvmf/common.sh@120 -- # set +e 00:24:47.384 20:56:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.384 20:56:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.384 rmmod nvme_tcp 00:24:47.645 rmmod nvme_fabrics 00:24:47.645 20:56:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.645 20:56:12 -- nvmf/common.sh@124 -- # set -e 00:24:47.645 20:56:12 -- nvmf/common.sh@125 -- # return 0 00:24:47.645 20:56:12 -- nvmf/common.sh@478 -- # '[' -n 2900865 ']' 00:24:47.645 20:56:12 -- nvmf/common.sh@479 -- # killprocess 2900865 00:24:47.645 20:56:12 -- common/autotest_common.sh@936 -- # '[' -z 2900865 ']' 00:24:47.645 20:56:12 -- common/autotest_common.sh@940 -- # kill -0 2900865 00:24:47.645 20:56:12 -- common/autotest_common.sh@941 -- # uname 00:24:47.645 20:56:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:47.645 20:56:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2900865 00:24:47.645 20:56:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:47.645 20:56:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:47.645 20:56:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2900865' 00:24:47.645 killing process with pid 2900865 00:24:47.645 20:56:12 -- common/autotest_common.sh@955 -- # kill 2900865 00:24:47.645 20:56:12 -- 
common/autotest_common.sh@960 -- # wait 2900865 00:24:47.645 20:56:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:47.645 20:56:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:47.645 20:56:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:47.645 20:56:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.645 20:56:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.645 20:56:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.645 20:56:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.645 20:56:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.190 20:56:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:50.190 20:56:14 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:50.190 20:56:14 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:50.190 20:56:14 -- host/auth.sh@27 -- # clean_kernel_target 00:24:50.190 20:56:14 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:50.190 20:56:14 -- nvmf/common.sh@675 -- # echo 0 00:24:50.190 20:56:14 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:50.190 20:56:14 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:50.190 20:56:14 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:50.190 20:56:14 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:50.190 20:56:14 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:50.190 20:56:14 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:50.190 20:56:14 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:53.492 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:24:53.492 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:24:54.064 20:56:18 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5kg /tmp/spdk.key-null.R2n /tmp/spdk.key-sha256.CeY /tmp/spdk.key-sha384.RFS /tmp/spdk.key-sha512.zzU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:54.064 20:56:18 -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:57.386 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:80:01.4 (8086 0b00): Already using the 
vfio-pci driver 00:24:57.386 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:24:57.386 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:24:57.386 00:24:57.386 real 0m57.880s 00:24:57.386 user 0m51.328s 00:24:57.386 sys 0m15.127s 00:24:57.386 20:56:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:57.386 20:56:21 -- common/autotest_common.sh@10 -- # set +x 00:24:57.386 ************************************ 00:24:57.386 END TEST nvmf_auth 00:24:57.386 ************************************ 00:24:57.386 20:56:21 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:24:57.386 20:56:21 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:57.386 20:56:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:57.386 20:56:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:57.386 20:56:21 -- common/autotest_common.sh@10 -- # set +x 00:24:57.648 ************************************ 00:24:57.648 START TEST nvmf_digest 00:24:57.648 ************************************ 00:24:57.648 20:56:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:57.648 * Looking for test storage... 
00:24:57.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.648 20:56:22 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.648 20:56:22 -- nvmf/common.sh@7 -- # uname -s 00:24:57.648 20:56:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.648 20:56:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.648 20:56:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.648 20:56:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.648 20:56:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.648 20:56:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.648 20:56:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.648 20:56:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.648 20:56:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.648 20:56:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.648 20:56:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:57.648 20:56:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:24:57.648 20:56:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.648 20:56:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.648 20:56:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.648 20:56:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.648 20:56:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.648 20:56:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.648 20:56:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.648 20:56:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.648 20:56:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.648 20:56:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.648 20:56:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.648 20:56:22 -- paths/export.sh@5 -- # export PATH 00:24:57.648 20:56:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.648 20:56:22 -- nvmf/common.sh@47 -- # : 0 00:24:57.648 20:56:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:57.648 20:56:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:57.648 20:56:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.648 20:56:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.648 20:56:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.648 20:56:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:57.648 20:56:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:57.648 20:56:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:57.648 20:56:22 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:57.648 20:56:22 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:57.648 20:56:22 -- host/digest.sh@16 -- # runtime=2 00:24:57.648 20:56:22 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:57.648 20:56:22 -- host/digest.sh@138 -- # nvmftestinit 00:24:57.648 20:56:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:57.648 20:56:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.648 20:56:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:57.648 20:56:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:57.648 20:56:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:57.648 20:56:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.648 20:56:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:57.648 20:56:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.648 20:56:22 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:24:57.648 20:56:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:57.648 20:56:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:57.648 20:56:22 -- common/autotest_common.sh@10 -- # set +x 00:25:05.867 20:56:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:05.867 20:56:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.867 20:56:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.867 20:56:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.867 20:56:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.867 20:56:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.867 20:56:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.867 20:56:29 -- 
nvmf/common.sh@295 -- # net_devs=() 00:25:05.867 20:56:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.867 20:56:29 -- nvmf/common.sh@296 -- # e810=() 00:25:05.867 20:56:29 -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.867 20:56:29 -- nvmf/common.sh@297 -- # x722=() 00:25:05.867 20:56:29 -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.867 20:56:29 -- nvmf/common.sh@298 -- # mlx=() 00:25:05.867 20:56:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.867 20:56:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.867 20:56:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.867 20:56:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.867 20:56:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.867 20:56:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.867 20:56:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:05.867 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:05.867 20:56:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.867 20:56:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:05.867 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:05.867 20:56:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.867 20:56:29 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.867 20:56:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.867 20:56:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:05.867 20:56:29 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.867 20:56:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:05.867 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:05.867 20:56:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.867 20:56:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.867 20:56:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.867 20:56:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:05.867 20:56:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.867 20:56:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:05.867 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:05.867 20:56:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.867 20:56:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:05.867 20:56:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:05.867 20:56:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:05.867 20:56:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:05.867 20:56:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.867 20:56:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.867 20:56:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.867 20:56:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.867 20:56:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.867 20:56:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.867 20:56:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.867 20:56:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.867 20:56:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.867 20:56:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:05.867 20:56:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:05.867 20:56:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.867 20:56:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.867 20:56:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.867 20:56:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.867 20:56:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:05.867 20:56:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.867 20:56:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.867 20:56:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.867 20:56:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:05.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms 00:25:05.867 00:25:05.867 --- 10.0.0.2 ping statistics --- 00:25:05.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.867 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:25:05.867 20:56:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:25:05.867 00:25:05.867 --- 10.0.0.1 ping statistics --- 00:25:05.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.867 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:05.867 20:56:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.867 20:56:29 -- nvmf/common.sh@411 -- # return 0 00:25:05.868 20:56:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:05.868 20:56:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.868 20:56:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:05.868 20:56:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:05.868 20:56:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.868 20:56:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:05.868 20:56:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:05.868 20:56:29 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:05.868 20:56:29 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:05.868 20:56:29 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:05.868 20:56:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:05.868 20:56:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:05.868 20:56:29 -- common/autotest_common.sh@10 -- # set +x 00:25:05.868 ************************************ 00:25:05.868 START TEST nvmf_digest_clean 00:25:05.868 ************************************ 00:25:05.868 20:56:29 -- common/autotest_common.sh@1111 -- # run_digest 00:25:05.868 20:56:29 -- host/digest.sh@120 -- # local dsa_initiator 00:25:05.868 20:56:29 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:05.868 20:56:29 -- host/digest.sh@121 -- # dsa_initiator=false 00:25:05.868 20:56:29 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:05.868 20:56:29 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:05.868 20:56:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:05.868 20:56:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:05.868 20:56:29 -- common/autotest_common.sh@10 -- # set +x 00:25:05.868 20:56:29 -- nvmf/common.sh@470 -- # nvmfpid=2917313 00:25:05.868 20:56:29 -- nvmf/common.sh@471 -- # waitforlisten 2917313 00:25:05.868 20:56:29 -- common/autotest_common.sh@817 -- # '[' -z 2917313 ']' 00:25:05.868 20:56:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:05.868 20:56:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.868 20:56:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:05.868 20:56:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.868 20:56:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:05.868 20:56:29 -- common/autotest_common.sh@10 -- # set +x 00:25:05.868 [2024-04-24 20:56:29.640032] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:25:05.868 [2024-04-24 20:56:29.640087] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.868 EAL: No free 2048 kB hugepages reported on node 1 00:25:05.868 [2024-04-24 20:56:29.724882] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.868 [2024-04-24 20:56:29.816778] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.868 [2024-04-24 20:56:29.816831] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.868 [2024-04-24 20:56:29.816839] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.868 [2024-04-24 20:56:29.816847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.868 [2024-04-24 20:56:29.816853] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:05.868 [2024-04-24 20:56:29.816880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.129 20:56:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.129 20:56:30 -- common/autotest_common.sh@850 -- # return 0 00:25:06.129 20:56:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:06.129 20:56:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:06.129 20:56:30 -- common/autotest_common.sh@10 -- # set +x 00:25:06.129 20:56:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.129 20:56:30 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:06.129 20:56:30 -- host/digest.sh@126 -- # common_target_config 00:25:06.129 20:56:30 -- host/digest.sh@43 -- # rpc_cmd 00:25:06.129 20:56:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:06.129 20:56:30 -- common/autotest_common.sh@10 -- # set +x 00:25:06.129 null0 00:25:06.129 [2024-04-24 20:56:30.639078] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.129 [2024-04-24 20:56:30.663319] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.129 20:56:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:06.129 20:56:30 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:06.129 20:56:30 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:06.129 20:56:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:06.129 20:56:30 -- host/digest.sh@80 -- # rw=randread 00:25:06.129 20:56:30 -- host/digest.sh@80 -- # bs=4096 00:25:06.129 20:56:30 -- host/digest.sh@80 -- # qd=128 00:25:06.129 20:56:30 -- host/digest.sh@80 -- # scan_dsa=false 00:25:06.129 20:56:30 -- host/digest.sh@83 -- # bperfpid=2917491 00:25:06.129 20:56:30 -- host/digest.sh@84 -- # waitforlisten 2917491 /var/tmp/bperf.sock 00:25:06.129 20:56:30 -- common/autotest_common.sh@817 -- # '[' -z 2917491 ']' 00:25:06.129 20:56:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.129 20:56:30 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:06.129 20:56:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:06.129 20:56:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:06.129 20:56:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:06.129 20:56:30 -- common/autotest_common.sh@10 -- # set +x 00:25:06.129 [2024-04-24 20:56:30.719847] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:06.129 [2024-04-24 20:56:30.719908] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917491 ] 00:25:06.129 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.390 [2024-04-24 20:56:30.782302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.390 [2024-04-24 20:56:30.854624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.390 20:56:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:06.390 20:56:30 -- common/autotest_common.sh@850 -- # return 0 00:25:06.391 20:56:30 -- host/digest.sh@86 -- # false 00:25:06.391 20:56:30 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:06.391 20:56:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:06.651 20:56:31 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.651 20:56:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.912 nvme0n1 00:25:06.912 20:56:31 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:06.912 20:56:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:06.912 Running I/O for 2 seconds... 
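[Editor's sketch, not part of the captured trace] Every run_bperf iteration in this digest test follows the same pattern: bdevperf is launched paused with --wait-for-rpc, the framework is started over the bperf socket, an NVMe-oF TCP controller is attached with the data digest enabled (--ddgst) so crc32c work is generated, and the I/O run is started through bdevperf.py. A sketch of that sequence, with binary paths, socket, address, and NQN taken from the trace:

    # Sketch of one run_bperf setup (randread, 4k, qd 128), as traced above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle so the transport can be configured over RPC first.
    $SPDK/build/examples/bdevperf -m 2 -r $BPERF_SOCK \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    $SPDK/scripts/rpc.py -s $BPERF_SOCK framework_start_init
    # --ddgst enables the NVMe/TCP data digest, exercising the accel crc32c path.
    $SPDK/scripts/rpc.py -s $BPERF_SOCK bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF_SOCK perform_tests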
00:25:09.455 00:25:09.455 Latency(us) 00:25:09.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.455 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:09.455 nvme0n1 : 2.01 19780.92 77.27 0.00 0.00 6461.80 3031.04 15400.96 00:25:09.455 =================================================================================================================== 00:25:09.455 Total : 19780.92 77.27 0.00 0.00 6461.80 3031.04 15400.96 00:25:09.455 0 00:25:09.455 20:56:33 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:09.455 20:56:33 -- host/digest.sh@93 -- # get_accel_stats 00:25:09.455 20:56:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:09.455 20:56:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:09.455 20:56:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:09.455 | select(.opcode=="crc32c") 00:25:09.455 | "\(.module_name) \(.executed)"' 00:25:09.455 20:56:33 -- host/digest.sh@94 -- # false 00:25:09.455 20:56:33 -- host/digest.sh@94 -- # exp_module=software 00:25:09.455 20:56:33 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:09.455 20:56:33 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:09.455 20:56:33 -- host/digest.sh@98 -- # killprocess 2917491 00:25:09.455 20:56:33 -- common/autotest_common.sh@936 -- # '[' -z 2917491 ']' 00:25:09.455 20:56:33 -- common/autotest_common.sh@940 -- # kill -0 2917491 00:25:09.455 20:56:33 -- common/autotest_common.sh@941 -- # uname 00:25:09.455 20:56:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:09.455 20:56:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2917491 00:25:09.455 20:56:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:09.455 20:56:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:09.455 20:56:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2917491' 00:25:09.455 killing process with pid 2917491 00:25:09.455 20:56:33 -- common/autotest_common.sh@955 -- # kill 2917491 00:25:09.455 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.455 00:25:09.455 Latency(us) 00:25:09.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.455 =================================================================================================================== 00:25:09.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.455 20:56:33 -- common/autotest_common.sh@960 -- # wait 2917491 00:25:09.455 20:56:33 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:09.455 20:56:33 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:09.455 20:56:33 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:09.455 20:56:33 -- host/digest.sh@80 -- # rw=randread 00:25:09.455 20:56:33 -- host/digest.sh@80 -- # bs=131072 00:25:09.455 20:56:33 -- host/digest.sh@80 -- # qd=16 00:25:09.455 20:56:33 -- host/digest.sh@80 -- # scan_dsa=false 00:25:09.455 20:56:33 -- host/digest.sh@83 -- # bperfpid=2918166 00:25:09.455 20:56:33 -- host/digest.sh@84 -- # waitforlisten 2918166 /var/tmp/bperf.sock 00:25:09.455 20:56:33 -- common/autotest_common.sh@817 -- # '[' -z 2918166 ']' 00:25:09.455 20:56:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.455 20:56:33 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:09.455 20:56:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:09.455 20:56:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.455 20:56:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:09.455 20:56:33 -- common/autotest_common.sh@10 -- # set +x 00:25:09.455 [2024-04-24 20:56:34.025398] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:09.455 [2024-04-24 20:56:34.025456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918166 ] 00:25:09.455 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:09.455 Zero copy mechanism will not be used. 00:25:09.455 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.455 [2024-04-24 20:56:34.085663] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.715 [2024-04-24 20:56:34.147330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.715 20:56:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:09.715 20:56:34 -- common/autotest_common.sh@850 -- # return 0 00:25:09.715 20:56:34 -- host/digest.sh@86 -- # false 00:25:09.715 20:56:34 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:09.715 20:56:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:09.975 20:56:34 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:09.975 20:56:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.236 nvme0n1 00:25:10.236 20:56:34 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:10.236 20:56:34 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.236 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.236 Zero copy mechanism will not be used. 00:25:10.236 Running I/O for 2 seconds... 
00:25:12.780 00:25:12.780 Latency(us) 00:25:12.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.780 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:12.780 nvme0n1 : 2.00 3628.95 453.62 0.00 0.00 4405.76 928.43 10649.60 00:25:12.780 =================================================================================================================== 00:25:12.780 Total : 3628.95 453.62 0.00 0.00 4405.76 928.43 10649.60 00:25:12.780 0 00:25:12.780 20:56:36 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:12.780 20:56:36 -- host/digest.sh@93 -- # get_accel_stats 00:25:12.780 20:56:36 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:12.780 20:56:36 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:12.780 | select(.opcode=="crc32c") 00:25:12.780 | "\(.module_name) \(.executed)"' 00:25:12.780 20:56:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:12.780 20:56:37 -- host/digest.sh@94 -- # false 00:25:12.780 20:56:37 -- host/digest.sh@94 -- # exp_module=software 00:25:12.780 20:56:37 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:12.780 20:56:37 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:12.780 20:56:37 -- host/digest.sh@98 -- # killprocess 2918166 00:25:12.780 20:56:37 -- common/autotest_common.sh@936 -- # '[' -z 2918166 ']' 00:25:12.780 20:56:37 -- common/autotest_common.sh@940 -- # kill -0 2918166 00:25:12.780 20:56:37 -- common/autotest_common.sh@941 -- # uname 00:25:12.780 20:56:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:12.780 20:56:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2918166 00:25:12.780 20:56:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:12.780 20:56:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:12.780 20:56:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2918166' 00:25:12.780 killing process with pid 2918166 00:25:12.780 20:56:37 -- common/autotest_common.sh@955 -- # kill 2918166 00:25:12.780 Received shutdown signal, test time was about 2.000000 seconds 00:25:12.780 00:25:12.780 Latency(us) 00:25:12.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:12.781 =================================================================================================================== 00:25:12.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:12.781 20:56:37 -- common/autotest_common.sh@960 -- # wait 2918166 00:25:12.781 20:56:37 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:12.781 20:56:37 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:12.781 20:56:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:12.781 20:56:37 -- host/digest.sh@80 -- # rw=randwrite 00:25:12.781 20:56:37 -- host/digest.sh@80 -- # bs=4096 00:25:12.781 20:56:37 -- host/digest.sh@80 -- # qd=128 00:25:12.781 20:56:37 -- host/digest.sh@80 -- # scan_dsa=false 00:25:12.781 20:56:37 -- host/digest.sh@83 -- # bperfpid=2918847 00:25:12.781 20:56:37 -- host/digest.sh@84 -- # waitforlisten 2918847 /var/tmp/bperf.sock 00:25:12.781 20:56:37 -- common/autotest_common.sh@817 -- # '[' -z 2918847 ']' 00:25:12.781 20:56:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:12.781 20:56:37 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r 
/var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:12.781 20:56:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:12.781 20:56:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:12.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:12.781 20:56:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:12.781 20:56:37 -- common/autotest_common.sh@10 -- # set +x 00:25:12.781 [2024-04-24 20:56:37.267667] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:12.781 [2024-04-24 20:56:37.267722] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2918847 ] 00:25:12.781 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.781 [2024-04-24 20:56:37.326394] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.781 [2024-04-24 20:56:37.387903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.041 20:56:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:13.041 20:56:37 -- common/autotest_common.sh@850 -- # return 0 00:25:13.041 20:56:37 -- host/digest.sh@86 -- # false 00:25:13.041 20:56:37 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.041 20:56:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:13.041 20:56:37 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.041 20:56:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:13.613 nvme0n1 00:25:13.613 20:56:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:13.613 20:56:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:13.613 Running I/O for 2 seconds... 
00:25:15.528 00:25:15.528 Latency(us) 00:25:15.528 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.528 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:15.528 nvme0n1 : 2.01 21087.88 82.37 0.00 0.00 6057.04 2949.12 10594.99 00:25:15.528 =================================================================================================================== 00:25:15.528 Total : 21087.88 82.37 0.00 0.00 6057.04 2949.12 10594.99 00:25:15.528 0 00:25:15.528 20:56:40 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:15.528 20:56:40 -- host/digest.sh@93 -- # get_accel_stats 00:25:15.528 20:56:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:15.529 20:56:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:15.529 20:56:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:15.529 | select(.opcode=="crc32c") 00:25:15.529 | "\(.module_name) \(.executed)"' 00:25:15.789 20:56:40 -- host/digest.sh@94 -- # false 00:25:15.789 20:56:40 -- host/digest.sh@94 -- # exp_module=software 00:25:15.789 20:56:40 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:15.789 20:56:40 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:15.789 20:56:40 -- host/digest.sh@98 -- # killprocess 2918847 00:25:15.789 20:56:40 -- common/autotest_common.sh@936 -- # '[' -z 2918847 ']' 00:25:15.789 20:56:40 -- common/autotest_common.sh@940 -- # kill -0 2918847 00:25:15.789 20:56:40 -- common/autotest_common.sh@941 -- # uname 00:25:15.789 20:56:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:15.789 20:56:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2918847 00:25:15.789 20:56:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:15.789 20:56:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:15.789 20:56:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2918847' 00:25:15.789 killing process with pid 2918847 00:25:15.789 20:56:40 -- common/autotest_common.sh@955 -- # kill 2918847 00:25:15.789 Received shutdown signal, test time was about 2.000000 seconds 00:25:15.789 00:25:15.789 Latency(us) 00:25:15.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.789 =================================================================================================================== 00:25:15.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:15.789 20:56:40 -- common/autotest_common.sh@960 -- # wait 2918847 00:25:16.050 20:56:40 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:16.050 20:56:40 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:16.050 20:56:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:16.050 20:56:40 -- host/digest.sh@80 -- # rw=randwrite 00:25:16.050 20:56:40 -- host/digest.sh@80 -- # bs=131072 00:25:16.050 20:56:40 -- host/digest.sh@80 -- # qd=16 00:25:16.050 20:56:40 -- host/digest.sh@80 -- # scan_dsa=false 00:25:16.050 20:56:40 -- host/digest.sh@83 -- # bperfpid=2919524 00:25:16.050 20:56:40 -- host/digest.sh@84 -- # waitforlisten 2919524 /var/tmp/bperf.sock 00:25:16.050 20:56:40 -- common/autotest_common.sh@817 -- # '[' -z 2919524 ']' 00:25:16.050 20:56:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:16.050 20:56:40 -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 
-r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:16.050 20:56:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:16.050 20:56:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:16.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:16.050 20:56:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:16.050 20:56:40 -- common/autotest_common.sh@10 -- # set +x 00:25:16.050 [2024-04-24 20:56:40.560335] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:16.050 [2024-04-24 20:56:40.560405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919524 ] 00:25:16.050 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.050 Zero copy mechanism will not be used. 00:25:16.050 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.050 [2024-04-24 20:56:40.617594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.050 [2024-04-24 20:56:40.678993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.311 20:56:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:16.311 20:56:40 -- common/autotest_common.sh@850 -- # return 0 00:25:16.311 20:56:40 -- host/digest.sh@86 -- # false 00:25:16.311 20:56:40 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:16.311 20:56:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:16.572 20:56:40 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.572 20:56:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:16.832 nvme0n1 00:25:16.832 20:56:41 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:16.832 20:56:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:16.832 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:16.832 Zero copy mechanism will not be used. 00:25:16.832 Running I/O for 2 seconds... 
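The pass above follows the pattern host/digest.sh uses for every run_bperf iteration: bdevperf is launched with --wait-for-rpc so it pauses before framework init, the script then talks to it over the private /var/tmp/bperf.sock socket to attach an NVMe-oF controller with data digest (--ddgst) enabled, drives I/O through bdevperf.py, and finally reads the crc32c accel statistics to see which module executed the digests. A condensed sketch of that sequence, using only the binaries, socket and RPC calls visible in this log (process management, the waitforlisten polling and the killprocess cleanup are omitted):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bperf.sock

  # Start bdevperf paused (--wait-for-rpc) on its own RPC socket; the workload parameters
  # are the ones from this particular pass (randwrite, 128 KiB I/O, queue depth 16, 2 s).
  $SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &

  # Finish framework init, then attach the target with data digest enabled.
  $SPDK/scripts/rpc.py -s $SOCK framework_start_init
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Run the workload, then report which accel module computed the crc32c digests.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
  $SPDK/scripts/rpc.py -s $SOCK accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'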
00:25:18.749 00:25:18.749 Latency(us) 00:25:18.749 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.749 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:18.749 nvme0n1 : 2.00 4207.52 525.94 0.00 0.00 3795.84 1897.81 11359.57 00:25:18.749 =================================================================================================================== 00:25:18.749 Total : 4207.52 525.94 0.00 0.00 3795.84 1897.81 11359.57 00:25:18.749 0 00:25:18.749 20:56:43 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:18.749 20:56:43 -- host/digest.sh@93 -- # get_accel_stats 00:25:18.749 20:56:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:18.749 20:56:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:18.749 | select(.opcode=="crc32c") 00:25:18.749 | "\(.module_name) \(.executed)"' 00:25:18.749 20:56:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:19.016 20:56:43 -- host/digest.sh@94 -- # false 00:25:19.016 20:56:43 -- host/digest.sh@94 -- # exp_module=software 00:25:19.016 20:56:43 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:19.016 20:56:43 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:19.016 20:56:43 -- host/digest.sh@98 -- # killprocess 2919524 00:25:19.016 20:56:43 -- common/autotest_common.sh@936 -- # '[' -z 2919524 ']' 00:25:19.016 20:56:43 -- common/autotest_common.sh@940 -- # kill -0 2919524 00:25:19.016 20:56:43 -- common/autotest_common.sh@941 -- # uname 00:25:19.016 20:56:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.016 20:56:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2919524 00:25:19.016 20:56:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:19.016 20:56:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:19.016 20:56:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2919524' 00:25:19.016 killing process with pid 2919524 00:25:19.016 20:56:43 -- common/autotest_common.sh@955 -- # kill 2919524 00:25:19.016 Received shutdown signal, test time was about 2.000000 seconds 00:25:19.016 00:25:19.016 Latency(us) 00:25:19.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:19.016 =================================================================================================================== 00:25:19.016 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:19.016 20:56:43 -- common/autotest_common.sh@960 -- # wait 2919524 00:25:19.276 20:56:43 -- host/digest.sh@132 -- # killprocess 2917313 00:25:19.276 20:56:43 -- common/autotest_common.sh@936 -- # '[' -z 2917313 ']' 00:25:19.276 20:56:43 -- common/autotest_common.sh@940 -- # kill -0 2917313 00:25:19.276 20:56:43 -- common/autotest_common.sh@941 -- # uname 00:25:19.276 20:56:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:19.276 20:56:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2917313 00:25:19.276 20:56:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:19.276 20:56:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:19.276 20:56:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2917313' 00:25:19.276 killing process with pid 2917313 00:25:19.276 20:56:43 -- common/autotest_common.sh@955 -- # kill 2917313 00:25:19.276 20:56:43 -- common/autotest_common.sh@960 -- # wait 2917313 00:25:19.537 
00:25:19.537 real 0m14.389s 00:25:19.537 user 0m28.476s 00:25:19.537 sys 0m3.305s 00:25:19.537 20:56:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:19.537 20:56:43 -- common/autotest_common.sh@10 -- # set +x 00:25:19.537 ************************************ 00:25:19.537 END TEST nvmf_digest_clean 00:25:19.537 ************************************ 00:25:19.537 20:56:44 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:19.537 20:56:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:19.537 20:56:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:19.537 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:25:19.537 ************************************ 00:25:19.537 START TEST nvmf_digest_error 00:25:19.537 ************************************ 00:25:19.537 20:56:44 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:19.537 20:56:44 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:19.537 20:56:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:19.537 20:56:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:19.537 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:25:19.537 20:56:44 -- nvmf/common.sh@470 -- # nvmfpid=2920232 00:25:19.537 20:56:44 -- nvmf/common.sh@471 -- # waitforlisten 2920232 00:25:19.537 20:56:44 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:19.537 20:56:44 -- common/autotest_common.sh@817 -- # '[' -z 2920232 ']' 00:25:19.537 20:56:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.537 20:56:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:19.537 20:56:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.537 20:56:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:19.537 20:56:44 -- common/autotest_common.sh@10 -- # set +x 00:25:19.798 [2024-04-24 20:56:44.199169] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:19.798 [2024-04-24 20:56:44.199220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.798 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.798 [2024-04-24 20:56:44.282591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.798 [2024-04-24 20:56:44.350283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.798 [2024-04-24 20:56:44.350320] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.798 [2024-04-24 20:56:44.350327] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:19.798 [2024-04-24 20:56:44.350333] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:19.798 [2024-04-24 20:56:44.350338] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
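The error-path test starts nvmf_tgt with --wait-for-rpc for the same reason bdevperf is started that way above: crc32c handling has to be re-routed to the "error" accel module before the framework initializes, so that the digests it produces can later be corrupted on demand. The key call, visible just below, is accel_assign_opc issued over the target's RPC socket; a minimal sketch, assuming the default /var/tmp/spdk.sock (the exact point at which the helper resumes framework init is not captured in this excerpt):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # nvmf_tgt is already running with --wait-for-rpc (see its command line above).
  # Route crc32c operations to the error-injection accel module, then let init proceed.
  $SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
  $SPDK/scripts/rpc.py framework_start_init   # assumed here; the call is not shown in this excerpt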
00:25:19.798 [2024-04-24 20:56:44.350358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.739 20:56:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:20.739 20:56:45 -- common/autotest_common.sh@850 -- # return 0 00:25:20.739 20:56:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:20.739 20:56:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:20.739 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:25:20.739 20:56:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.739 20:56:45 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:20.739 20:56:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.739 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:25:20.739 [2024-04-24 20:56:45.124536] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:20.739 20:56:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.739 20:56:45 -- host/digest.sh@105 -- # common_target_config 00:25:20.739 20:56:45 -- host/digest.sh@43 -- # rpc_cmd 00:25:20.739 20:56:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:20.739 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:25:20.739 null0 00:25:20.739 [2024-04-24 20:56:45.201272] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.739 [2024-04-24 20:56:45.225474] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.739 20:56:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:20.739 20:56:45 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:20.739 20:56:45 -- host/digest.sh@54 -- # local rw bs qd 00:25:20.739 20:56:45 -- host/digest.sh@56 -- # rw=randread 00:25:20.739 20:56:45 -- host/digest.sh@56 -- # bs=4096 00:25:20.739 20:56:45 -- host/digest.sh@56 -- # qd=128 00:25:20.739 20:56:45 -- host/digest.sh@58 -- # bperfpid=2920366 00:25:20.739 20:56:45 -- host/digest.sh@60 -- # waitforlisten 2920366 /var/tmp/bperf.sock 00:25:20.739 20:56:45 -- common/autotest_common.sh@817 -- # '[' -z 2920366 ']' 00:25:20.739 20:56:45 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:20.739 20:56:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:20.739 20:56:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:20.739 20:56:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:20.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:20.739 20:56:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:20.739 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:25:20.739 [2024-04-24 20:56:45.286678] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:25:20.739 [2024-04-24 20:56:45.286778] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2920366 ] 00:25:20.739 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.739 [2024-04-24 20:56:45.346482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.000 [2024-04-24 20:56:45.408422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.000 20:56:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.000 20:56:45 -- common/autotest_common.sh@850 -- # return 0 00:25:21.000 20:56:45 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:21.000 20:56:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:21.261 20:56:45 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:21.261 20:56:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.261 20:56:45 -- common/autotest_common.sh@10 -- # set +x 00:25:21.261 20:56:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.261 20:56:45 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.261 20:56:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:21.522 nvme0n1 00:25:21.522 20:56:46 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:21.522 20:56:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:21.522 20:56:46 -- common/autotest_common.sh@10 -- # set +x 00:25:21.522 20:56:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:21.522 20:56:46 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:21.522 20:56:46 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:21.781 Running I/O for 2 seconds... 
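Everything from here on is the payload of the error test. With crc32c assigned to the error module on the target, the script arms corruption for 256 operations, attaches the initiator (randread, 4 KiB, queue depth 128) with data digest enabled and unlimited bdev retries, and runs I/O; each corrupted digest then shows up below as a "data digest error" reported by the initiator's receive path (nvme_tcp.c), followed by a TRANSIENT TRANSPORT ERROR completion that gets retried. The arming sequence, condensed from the RPC calls logged above (sockets and parameters exactly as shown there):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Initiator side (bperf.sock): retry indefinitely so injected failures do not end the run.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # Target side (default socket): injection is disabled while the controller attaches, as logged.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Then the next 256 crc32c results are corrupted and the workload is started.
  $SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests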
00:25:21.781 [2024-04-24 20:56:46.252478] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.781 [2024-04-24 20:56:46.252515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.781 [2024-04-24 20:56:46.252527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.781 [2024-04-24 20:56:46.267314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.781 [2024-04-24 20:56:46.267338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.781 [2024-04-24 20:56:46.267347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.781 [2024-04-24 20:56:46.281615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.781 [2024-04-24 20:56:46.281638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.781 [2024-04-24 20:56:46.281647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.781 [2024-04-24 20:56:46.294999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.781 [2024-04-24 20:56:46.295023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.781 [2024-04-24 20:56:46.295033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.781 [2024-04-24 20:56:46.306088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.781 [2024-04-24 20:56:46.306109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.781 [2024-04-24 20:56:46.306117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.781 [2024-04-24 20:56:46.321135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.781 [2024-04-24 20:56:46.321157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.781 [2024-04-24 20:56:46.321167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.781 [2024-04-24 20:56:46.335455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.782 [2024-04-24 20:56:46.335477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.782 [2024-04-24 20:56:46.335485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.782 [2024-04-24 20:56:46.347322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.782 [2024-04-24 20:56:46.347344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.782 [2024-04-24 20:56:46.347352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.782 [2024-04-24 20:56:46.361644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.782 [2024-04-24 20:56:46.361665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.782 [2024-04-24 20:56:46.361674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.782 [2024-04-24 20:56:46.371986] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.782 [2024-04-24 20:56:46.372007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.782 [2024-04-24 20:56:46.372015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.782 [2024-04-24 20:56:46.386372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.782 [2024-04-24 20:56:46.386393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.782 [2024-04-24 20:56:46.386406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.782 [2024-04-24 20:56:46.401328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.782 [2024-04-24 20:56:46.401349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:4380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.782 [2024-04-24 20:56:46.401358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:21.782 [2024-04-24 20:56:46.412274] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:21.782 [2024-04-24 20:56:46.412295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.782 [2024-04-24 20:56:46.412303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.427526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.427548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.427558] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.439600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.439621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:10398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.439630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.454623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.454645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.454653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.466136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.466157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.466166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.479963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.479984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.479992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.493935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.493956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.493964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.504457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.504482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.504490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.517981] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.518003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.518011] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.531978] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.531999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.532007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.544395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.544416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.544425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.556560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.556581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.556590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.569917] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.569938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.569947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.581847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.581867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.581875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.594891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.594912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.594921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.605898] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.605919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:22.041 [2024-04-24 20:56:46.605927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.619162] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.619183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.619192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.632320] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.632340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.632349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.642932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.642953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.642961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.656270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.656290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.656298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.041 [2024-04-24 20:56:46.671840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.041 [2024-04-24 20:56:46.671861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.041 [2024-04-24 20:56:46.671870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.683553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.683575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.683583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.696457] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.696478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21672 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.696487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.709221] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.709242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.709251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.722286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.722307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.722319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.734432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.734453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.734462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.746532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.746553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.746561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.758627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.758648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.758657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.774251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.301 [2024-04-24 20:56:46.774273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.301 [2024-04-24 20:56:46.774282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.301 [2024-04-24 20:56:46.789092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.789114] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.789123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.800592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.800613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.800622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.813941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.813962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.813971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.827913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.827933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.827942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.839668] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.839689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.839698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.855465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.855487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.855496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.869485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.869506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.869515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.882143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.882164] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.882173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.893533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.893554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.893562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.908040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.908061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.908069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.923712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.923737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.923746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.302 [2024-04-24 20:56:46.938387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.302 [2024-04-24 20:56:46.938408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.302 [2024-04-24 20:56:46.938417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:46.951678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:46.951699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:46.951711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:46.963445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:46.963466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:46.963475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:46.975357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 
00:25:22.562 [2024-04-24 20:56:46.975378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:46.975387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:46.988394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:46.988415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:46.988424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.001238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.001259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.001269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.012874] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.012896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.012905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.026416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.026437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.026445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.038305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.038330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.038339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.051093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.051115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.051124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.064937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.064961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.064970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.076338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.076359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.076367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.090132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.090153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.090161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.101742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.101763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.101771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.115313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.115335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.115344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.127548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.127569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.127578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.139807] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.139828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.139837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.155060] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.155081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.155090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.167854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.167875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.167884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.179216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.179237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.179246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.562 [2024-04-24 20:56:47.193170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.562 [2024-04-24 20:56:47.193190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.562 [2024-04-24 20:56:47.193199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.206588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.206609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.206618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.217713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.217739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.217748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.231071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.231092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.231100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:22.822 [2024-04-24 20:56:47.244356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.244377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.244386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.256570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.256592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.256601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.268777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.268799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.268807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.280867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.280888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.280902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.293958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.293980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.293988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.822 [2024-04-24 20:56:47.308287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.822 [2024-04-24 20:56:47.308309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.822 [2024-04-24 20:56:47.308318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.321819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.321840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.321849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.333773] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.333794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.333803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.349363] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.349385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.349393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.363453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.363474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.363482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.375931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.375952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.375961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.386765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.386786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.386795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.400769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.400794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.400802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.414571] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.414593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.414601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.426492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.426513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.426522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.438004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.438025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.438033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:22.823 [2024-04-24 20:56:47.451200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:22.823 [2024-04-24 20:56:47.451221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:5384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:22.823 [2024-04-24 20:56:47.451230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.463211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.463233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.463241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.478413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.478433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.478442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.491176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.491197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.491205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.503374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.503395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.084 [2024-04-24 20:56:47.503404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.516592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.516613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.516622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.527566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.527587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.527595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.541193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.541214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.541223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.554702] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.554731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.554740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.568674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.568695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.568704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.580657] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.580678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.580687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.596645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.596666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4507 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.596675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.611685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.611706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.611715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.622936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.622957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.622969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.639286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.639308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.639317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.655674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.655694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.655703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.668810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.668830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.668839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.680406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.680427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.680436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.695260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.695281] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.695289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.084 [2024-04-24 20:56:47.710632] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.084 [2024-04-24 20:56:47.710654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.084 [2024-04-24 20:56:47.710663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.725026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.725047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.725055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.736366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.736386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.736395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.751199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.751221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.751229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.761687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.761708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.761716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.775029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.775050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.775059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.791719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.791750] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.791759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.802016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.802036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.802045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.817532] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.817553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.817562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.830533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.830554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.830563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.843123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.843143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.843152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.856384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.345 [2024-04-24 20:56:47.856405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.345 [2024-04-24 20:56:47.856417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.345 [2024-04-24 20:56:47.867157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.867179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.867188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.880225] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 
00:25:23.346 [2024-04-24 20:56:47.880246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.880255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.893938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.893959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.893968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.909667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.909689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.909698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.921027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.921047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.921056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.933427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.933449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.933457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.945778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.945799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.945807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.960325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.960346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.960355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.346 [2024-04-24 20:56:47.971642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.346 [2024-04-24 20:56:47.971666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.346 [2024-04-24 20:56:47.971675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:47.986203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:47.986224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:47.986232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:47.998354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:47.998374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:47.998383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.012065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.012086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.012095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.025190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.025212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.025220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.035797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.035819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.035828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.050231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.050253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.050262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.065076] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.065096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.065105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.077860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.077880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.077888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.088596] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.088617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.088625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.102818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.102839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.102848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.117374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.117395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.117403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.129815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.129836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.129845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.143555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.143576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.143584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:23.606 [2024-04-24 20:56:48.156920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.156941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.156949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.170494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.170516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.170524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.183063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.183083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.183092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.195146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.195166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.195178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.207889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.207910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.207919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.219163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.219184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.219193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.606 [2024-04-24 20:56:48.232502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe1d730) 00:25:23.606 [2024-04-24 20:56:48.232523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.606 [2024-04-24 20:56:48.232532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:23.866
00:25:23.866 Latency(us)
00:25:23.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:23.866 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:25:23.866 nvme0n1 : 2.04 19058.42 74.45 0.00 0.00 6580.07 3126.61 43909.12
00:25:23.866 ===================================================================================================================
00:25:23.866 Total : 19058.42 74.45 0.00 0.00 6580.07 3126.61 43909.12
00:25:23.866 0
00:25:23.866 20:56:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:23.866 20:56:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:23.866 20:56:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:23.866 20:56:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:23.866 | .driver_specific
00:25:23.866 | .nvme_error
00:25:23.866 | .status_code
00:25:23.866 | .command_transient_transport_error'
00:25:24.126 20:56:48 -- host/digest.sh@71 -- # (( 152 > 0 ))
00:25:24.126 20:56:48 -- host/digest.sh@73 -- # killprocess 2920366
00:25:24.126 20:56:48 -- common/autotest_common.sh@936 -- # '[' -z 2920366 ']'
00:25:24.126 20:56:48 -- common/autotest_common.sh@940 -- # kill -0 2920366
00:25:24.126 20:56:48 -- common/autotest_common.sh@941 -- # uname
00:25:24.126 20:56:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:24.126 20:56:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2920366
00:25:24.126 20:56:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:24.126 20:56:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:24.126 20:56:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2920366'
00:25:24.126 killing process with pid 2920366
00:25:24.126 20:56:48 -- common/autotest_common.sh@955 -- # kill 2920366
00:25:24.126 Received shutdown signal, test time was about 2.000000 seconds
00:25:24.126
00:25:24.126 Latency(us)
00:25:24.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:24.126 ===================================================================================================================
00:25:24.126 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:24.126 20:56:48 -- common/autotest_common.sh@960 -- # wait 2920366
00:25:24.126 20:56:48 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:25:24.126 20:56:48 -- host/digest.sh@54 -- # local rw bs qd
00:25:24.126 20:56:48 -- host/digest.sh@56 -- # rw=randread
00:25:24.126 20:56:48 -- host/digest.sh@56 -- # bs=131072
00:25:24.126 20:56:48 -- host/digest.sh@56 -- # qd=16
00:25:24.126 20:56:48 -- host/digest.sh@58 -- # bperfpid=2921029
00:25:24.126 20:56:48 -- host/digest.sh@60 -- # waitforlisten 2921029 /var/tmp/bperf.sock
00:25:24.126 20:56:48 -- common/autotest_common.sh@817 -- # '[' -z 2921029 ']'
00:25:24.126 20:56:48 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:25:24.126 20:56:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:24.126 20:56:48 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:24.126 20:56:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:24.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:24.127 20:56:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:24.127 20:56:48 -- common/autotest_common.sh@10 -- # set +x 00:25:24.127 [2024-04-24 20:56:48.752885] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:24.127 [2024-04-24 20:56:48.752958] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921029 ] 00:25:24.127 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.127 Zero copy mechanism will not be used. 00:25:24.387 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.387 [2024-04-24 20:56:48.811228] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.387 [2024-04-24 20:56:48.873117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.387 20:56:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:24.387 20:56:48 -- common/autotest_common.sh@850 -- # return 0 00:25:24.387 20:56:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:24.387 20:56:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:24.707 20:56:49 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:24.707 20:56:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.707 20:56:49 -- common/autotest_common.sh@10 -- # set +x 00:25:24.707 20:56:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.707 20:56:49 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:24.708 20:56:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:24.980 nvme0n1 00:25:24.980 20:56:49 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:24.980 20:56:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.980 20:56:49 -- common/autotest_common.sh@10 -- # set +x 00:25:24.980 20:56:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.980 20:56:49 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:24.980 20:56:49 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:24.980 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:24.980 Zero copy mechanism will not be used. 00:25:24.980 Running I/O for 2 seconds... 
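For readers following this trace, the digest error-injection run set up above reduces to the command sequence below. This is a minimal sketch reassembled from the bdevperf and rpc.py invocations visible in the log, not the digest.sh source itself; $SPDK_DIR stands in for the workspace path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk shown above, and routing the accel_error_inject_error call to the bperf socket is an assumption (the trace issues it through the rpc_cmd helper).

  # start bdevperf against its own RPC socket: 131072-byte random reads, queue depth 16, 2-second run, wait for RPC start (-z)
  $SPDK_DIR/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &

  # keep per-status-code NVMe error counters and disable retries so transient transport errors remain visible
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the TCP controller with data digest (--ddgst) enabled, then arm crc32c corruption in the accel layer (flags as shown in the trace)
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32

  # drive I/O for the configured 2 seconds, then read back how many completions were counted as transient transport errors
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The first run traced above exercised the same path with 4096-byte reads at queue depth 128 and reported 152 such errors before its bdevperf process was killed; the run that starts here repeats it with 131072-byte reads at queue depth 16.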
00:25:24.980 [2024-04-24 20:56:49.608267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:24.980 [2024-04-24 20:56:49.608304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.980 [2024-04-24 20:56:49.608316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:24.980 [2024-04-24 20:56:49.617184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:24.980 [2024-04-24 20:56:49.617213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.980 [2024-04-24 20:56:49.617223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.627914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.627938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.627947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.638809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.638831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.638840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.649064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.649086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.649095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.659697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.659719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.659733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.668856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.668877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.668886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.678575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.678596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.678605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.688028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.688049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.688058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.698998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.699019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.699028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.708076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.708097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.708105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.719386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.719407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.719416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.729591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.729612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.729621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.737001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.737022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.737031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.745920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.745941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.745950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.755178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.755199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.755207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.763796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.763817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.763826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.772250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.772271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.772280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.781358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.781380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.781392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.791424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.791446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.791454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.801247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.801269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:25.240 [2024-04-24 20:56:49.801278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.811175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.811196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.811204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.820666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.820687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.820695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.827771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.827792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.827801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.838019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.838039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.838048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.843699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.843720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.843733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.850741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.850762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.850771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.240 [2024-04-24 20:56:49.859436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.240 [2024-04-24 20:56:49.859456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.240 [2024-04-24 20:56:49.859465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.241 [2024-04-24 20:56:49.868884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.241 [2024-04-24 20:56:49.868905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.241 [2024-04-24 20:56:49.868914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.241 [2024-04-24 20:56:49.877485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.241 [2024-04-24 20:56:49.877507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.241 [2024-04-24 20:56:49.877516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.887585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.887606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.887616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.897288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.897309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.897318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.903982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.904003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.904012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.911376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.911397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.911405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.919491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.919511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.919520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.928400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.928421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.928434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.938251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.938271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.938280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.944907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.944928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.944937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.951376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.951397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.951405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.961339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.961360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.961369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.969542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.969563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.969572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.980181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 
00:25:25.501 [2024-04-24 20:56:49.980202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.980210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.989089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.989111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.989119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:49.999412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:49.999433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:49.999442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:50.008616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:50.008642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:50.008651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:50.015970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:50.015992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:50.016001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:50.023366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:50.023387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:50.023396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:50.032410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:50.032431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:50.032440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.501 [2024-04-24 20:56:50.040252] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.501 [2024-04-24 20:56:50.040273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.501 [2024-04-24 20:56:50.040282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.046423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.046444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.046453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.052429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.052450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.052459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.059685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.059707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.059716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.066536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.066558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.066567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.072022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.072044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.072053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.078311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.078333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.078342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.084314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.084335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.084344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.090940] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.090962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.090970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.099723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.099751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.099760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.112452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.112475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.112484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.124545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.124567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.124576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.502 [2024-04-24 20:56:50.137488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.502 [2024-04-24 20:56:50.137511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.502 [2024-04-24 20:56:50.137520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.149368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.149390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.149405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.160123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.160145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.160154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.170431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.170453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.170462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.181264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.181286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.181295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.190068] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.190090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.190099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.199913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.199935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.199944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.208825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.208847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.208855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.217015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.217038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.217047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.227406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.227428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.227437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.238442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.238464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.238473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.251676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.251698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.251707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.262860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.262881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.262890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.270345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.270368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.270377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.280833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.761 [2024-04-24 20:56:50.280855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.761 [2024-04-24 20:56:50.280864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.761 [2024-04-24 20:56:50.290501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.290523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 
[2024-04-24 20:56:50.290532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.300195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.300218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.300227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.308893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.308916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.308924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.317164] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.317187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.317199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.325267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.325289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.325298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.335270] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.335292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.335300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.345220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.345242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.345250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.355417] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.355439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4096 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.355447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.364951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.364973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.364982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.373524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.373546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.373555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.382613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.382635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.382644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.392247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.392269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.392278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:25.762 [2024-04-24 20:56:50.401204] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:25.762 [2024-04-24 20:56:50.401230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.762 [2024-04-24 20:56:50.401239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.409964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.409987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.409995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.422642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.422664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.422673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.432839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.432861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.432869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.444432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.444454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.444463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.452059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.452080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.452089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.461552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.461574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.461583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.471171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.471193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.471201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.480870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.480892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.480901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.490121] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.490143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.490152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.498701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.498724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.498738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.508796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.508819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.508828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.515932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.515954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.515963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.526620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.526642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.526651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.537351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.537374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.537383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.547129] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.547151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.547160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.555816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 
[2024-04-24 20:56:50.555838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.555847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.566287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.566310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.566322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.576759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.576781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.576789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.586977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.587000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.587009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.596905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.596926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.596935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.607201] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.607223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.607232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.618305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.618327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.618336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.626406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.626429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.626438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.633490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.059 [2024-04-24 20:56:50.633512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.059 [2024-04-24 20:56:50.633521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.059 [2024-04-24 20:56:50.640872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.060 [2024-04-24 20:56:50.640894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.060 [2024-04-24 20:56:50.640903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.060 [2024-04-24 20:56:50.649448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.060 [2024-04-24 20:56:50.649540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.060 [2024-04-24 20:56:50.649548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.060 [2024-04-24 20:56:50.658661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.060 [2024-04-24 20:56:50.658682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.060 [2024-04-24 20:56:50.658691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.060 [2024-04-24 20:56:50.668752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.060 [2024-04-24 20:56:50.668774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.060 [2024-04-24 20:56:50.668782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.060 [2024-04-24 20:56:50.678835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.060 [2024-04-24 20:56:50.678857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.060 [2024-04-24 20:56:50.678866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.060 [2024-04-24 20:56:50.688534] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.060 [2024-04-24 20:56:50.688556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.060 [2024-04-24 20:56:50.688565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.699128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.699151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.699160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.707893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.707915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.707924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.717211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.717232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.717241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.727735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.727757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.727766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.737565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.737588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.737596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.744614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.744637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.744645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:25:26.322 [2024-04-24 20:56:50.753110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.753133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.753142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.761927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.761949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.761958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.772367] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.322 [2024-04-24 20:56:50.772389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.322 [2024-04-24 20:56:50.772398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.322 [2024-04-24 20:56:50.778058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.778079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.778088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.786211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.786236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.786246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.796537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.796560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.796569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.806208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.806230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.806243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.815441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.815463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.815472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.824988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.825010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.825020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.834036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.834058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.834067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.846631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.846653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.846662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.855251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.855273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.855282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.865045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.865067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.865076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.873781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.873803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.873812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.880612] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.880634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.880643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.890533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.890555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.890565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.900847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.900869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.900878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.909441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.909464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.909473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.918786] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.918807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.918816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.927542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.927563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.323 [2024-04-24 20:56:50.927572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.937113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.937135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:26.323 [2024-04-24 20:56:50.937143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.323 [2024-04-24 20:56:50.944443] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.323 [2024-04-24 20:56:50.944465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-04-24 20:56:50.944474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.324 [2024-04-24 20:56:50.952395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.324 [2024-04-24 20:56:50.952416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-04-24 20:56:50.952425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.324 [2024-04-24 20:56:50.959825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.324 [2024-04-24 20:56:50.959847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.324 [2024-04-24 20:56:50.959860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:50.969796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:50.969819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:50.969828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:50.979835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:50.979857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:50.979866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:50.989758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:50.989780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:50.989788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.000568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.000590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1952 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.000598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.010569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.010591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.010599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.020931] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.020952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.020961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.029833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.029856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.029864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.039210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.039232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.039240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.049019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.049044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.049053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.059235] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.059257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.059266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.067636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.067658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.067666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.075357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.075380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.075389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.085526] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.085548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.085557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.095847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.095869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.095878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.107251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.107273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.107282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.117105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.117127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.117136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.126971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.126993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.127003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.137928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 
00:25:26.584 [2024-04-24 20:56:51.137950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.137959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.148168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.148190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.148199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.156900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.156922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.156931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.167269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.167291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.167300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.175459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.584 [2024-04-24 20:56:51.175481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.584 [2024-04-24 20:56:51.175490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.584 [2024-04-24 20:56:51.184372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.585 [2024-04-24 20:56:51.184393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.585 [2024-04-24 20:56:51.184402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.585 [2024-04-24 20:56:51.194710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.585 [2024-04-24 20:56:51.194736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.585 [2024-04-24 20:56:51.194745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.585 [2024-04-24 20:56:51.203356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.585 [2024-04-24 20:56:51.203378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.585 [2024-04-24 20:56:51.203387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.585 [2024-04-24 20:56:51.213574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.585 [2024-04-24 20:56:51.213596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.585 [2024-04-24 20:56:51.213608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.585 [2024-04-24 20:56:51.222437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.585 [2024-04-24 20:56:51.222460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.585 [2024-04-24 20:56:51.222468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.232013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.232036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.232044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.241160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.241183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.241191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.249853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.249875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.249884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.259181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.259203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.259211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.267625] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.267647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.267656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.277538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.277560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.277568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.286264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.286286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.286295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.296840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.296867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.296875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.304810] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.304832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.304841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.314848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.314869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.314878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.322555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.322576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.322585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:25:26.845 [2024-04-24 20:56:51.329793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.329814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.329823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.341460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.845 [2024-04-24 20:56:51.341482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.845 [2024-04-24 20:56:51.341491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.845 [2024-04-24 20:56:51.351997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.352019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.352028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.364228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.364250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.364258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.375659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.375682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.375690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.382324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.382345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.382354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.391490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.391512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.391520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.401493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.401515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.401523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.410187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.410209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.410218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.420653] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.420675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.420683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.429148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.429170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.429179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.438327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.438350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.438358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.448046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.448069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.448077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.457722] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.457759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.457768] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.468309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.468332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.468340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:26.846 [2024-04-24 20:56:51.476472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:26.846 [2024-04-24 20:56:51.476494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.846 [2024-04-24 20:56:51.476503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.485530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.485553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.485561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.494980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.495003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.495011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.505542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.505564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.505573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.515925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.515947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.515956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.526481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.526502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.526511] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.535994] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.536016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.536025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.545345] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.545368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.545376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.555181] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.555203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.555211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.563575] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.563597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.563605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.572673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.572695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.572704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.581315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.581337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.581346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.590120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.590142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:27.107 [2024-04-24 20:56:51.590151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:27.107 [2024-04-24 20:56:51.598637] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c2c820) 00:25:27.107 [2024-04-24 20:56:51.598659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.107 [2024-04-24 20:56:51.598667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:27.107 00:25:27.107 Latency(us) 00:25:27.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.107 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:27.107 nvme0n1 : 2.00 3341.36 417.67 0.00 0.00 4785.41 942.08 13216.43 00:25:27.107 =================================================================================================================== 00:25:27.107 Total : 3341.36 417.67 0.00 0.00 4785.41 942.08 13216.43 00:25:27.107 0 00:25:27.107 20:56:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:27.107 20:56:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:27.107 20:56:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:27.107 | .driver_specific 00:25:27.107 | .nvme_error 00:25:27.107 | .status_code 00:25:27.107 | .command_transient_transport_error' 00:25:27.107 20:56:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:27.366 20:56:51 -- host/digest.sh@71 -- # (( 215 > 0 )) 00:25:27.366 20:56:51 -- host/digest.sh@73 -- # killprocess 2921029 00:25:27.366 20:56:51 -- common/autotest_common.sh@936 -- # '[' -z 2921029 ']' 00:25:27.366 20:56:51 -- common/autotest_common.sh@940 -- # kill -0 2921029 00:25:27.366 20:56:51 -- common/autotest_common.sh@941 -- # uname 00:25:27.366 20:56:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:27.366 20:56:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2921029 00:25:27.366 20:56:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:27.366 20:56:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:27.366 20:56:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2921029' 00:25:27.366 killing process with pid 2921029 00:25:27.366 20:56:51 -- common/autotest_common.sh@955 -- # kill 2921029 00:25:27.367 Received shutdown signal, test time was about 2.000000 seconds 00:25:27.367 00:25:27.367 Latency(us) 00:25:27.367 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.367 =================================================================================================================== 00:25:27.367 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.367 20:56:51 -- common/autotest_common.sh@960 -- # wait 2921029 00:25:27.627 20:56:52 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:27.627 20:56:52 -- host/digest.sh@54 -- # local rw bs qd 00:25:27.627 20:56:52 -- host/digest.sh@56 -- # rw=randwrite 00:25:27.627 20:56:52 -- host/digest.sh@56 -- # bs=4096 00:25:27.627 20:56:52 -- host/digest.sh@56 -- # qd=128 00:25:27.627 20:56:52 -- host/digest.sh@58 -- # bperfpid=2921638 00:25:27.627 20:56:52 -- host/digest.sh@60 -- # waitforlisten 
2921638 /var/tmp/bperf.sock 00:25:27.627 20:56:52 -- common/autotest_common.sh@817 -- # '[' -z 2921638 ']' 00:25:27.627 20:56:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:27.627 20:56:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:27.627 20:56:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:27.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:27.627 20:56:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:27.627 20:56:52 -- common/autotest_common.sh@10 -- # set +x 00:25:27.627 20:56:52 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:27.627 [2024-04-24 20:56:52.079451] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:27.627 [2024-04-24 20:56:52.079511] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2921638 ] 00:25:27.627 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.627 [2024-04-24 20:56:52.138205] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.627 [2024-04-24 20:56:52.199973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.889 20:56:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:27.889 20:56:52 -- common/autotest_common.sh@850 -- # return 0 00:25:27.889 20:56:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.889 20:56:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:27.889 20:56:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:27.889 20:56:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:27.889 20:56:52 -- common/autotest_common.sh@10 -- # set +x 00:25:27.889 20:56:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:27.889 20:56:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:27.889 20:56:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:28.148 nvme0n1 00:25:28.148 20:56:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:28.148 20:56:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:28.148 20:56:52 -- common/autotest_common.sh@10 -- # set +x 00:25:28.148 20:56:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:28.148 20:56:52 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:28.148 20:56:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:28.408 Running I/O for 2 seconds... 
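The trace above is the randwrite leg of host/digest.sh: it restarts bdevperf against a fresh /var/tmp/bperf.sock, keeps per-command NVMe error statistics with unlimited bdev retries, attaches the controller over TCP with data digest (--ddgst) enabled, corrupts every 256th crc32c operation in the accel layer, runs the job, and finally reads back how many completions came in as TRANSIENT TRANSPORT ERROR. Condensed into a standalone sequence it looks roughly like the sketch below; the individual RPCs are taken verbatim from the trace, while the target-side RPC socket path and the plain `sleep` in place of the suite's waitforlisten helper are assumptions.

    #!/usr/bin/env bash
    # Minimal sketch of the digest-error flow traced above (not the test script itself).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock       # bdevperf's RPC socket, as in the trace
    TGT_SOCK=/var/tmp/spdk.sock          # assumed default socket of the nvmf target (digest.sh uses rpc_cmd)

    # 1. Start bdevperf in wait mode (-z): 2-second, queue-depth-128, 4 KiB randwrite job on core 1.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    sleep 2   # digest.sh waits for the RPC socket via waitforlisten instead

    # 2. Track NVMe status codes per command and retry failed I/O forever instead of surfacing errors.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Make sure injection is off while the controller attaches, then enable data digest over TCP.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Corrupt every 256th crc32c computation in the accel layer, then run the I/O job.
    "$SPDK/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

    # 5. Each injected digest mismatch completes as COMMAND TRANSIENT TRANSPORT ERROR and is retried;
    #    the test passes when this counter is non-zero (the earlier randread leg saw 215).
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

Because --bdev-retry-count -1 retries indefinitely, the digest errors never fail the bdevperf job itself; they only show up as the repeated TRANSIENT TRANSPORT ERROR notices in the log and as the error-stat counter that digest.sh asserts is greater than zero.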
00:25:28.408 [2024-04-24 20:56:52.885319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190ee190 00:25:28.408 [2024-04-24 20:56:52.886421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.408 [2024-04-24 20:56:52.886456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:28.408 [2024-04-24 20:56:52.897764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e9168 00:25:28.409 [2024-04-24 20:56:52.898843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.898866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.910062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f0ff8 00:25:28.409 [2024-04-24 20:56:52.910990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.911012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.922273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190ef6a8 00:25:28.409 [2024-04-24 20:56:52.923510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:23930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.923530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.933595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f7da8 00:25:28.409 [2024-04-24 20:56:52.934808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.934828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.945865] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f9b30 00:25:28.409 [2024-04-24 20:56:52.947085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.947106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.956942] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190fac10 00:25:28.409 [2024-04-24 20:56:52.957789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.957809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.970597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e7818 00:25:28.409 [2024-04-24 20:56:52.971698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.971718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.984332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190fbcf0 00:25:28.409 [2024-04-24 20:56:52.986254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.986274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:52.994442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f6020 00:25:28.409 [2024-04-24 20:56:52.995341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:52.995361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:53.008014] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190df118 00:25:28.409 [2024-04-24 20:56:53.009758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:53.009778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:53.018264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f0350 00:25:28.409 [2024-04-24 20:56:53.019339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:53.019358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:53.030503] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190dfdc0 00:25:28.409 [2024-04-24 20:56:53.031387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:53.031407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:28.409 [2024-04-24 20:56:53.044306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f5378 00:25:28.409 [2024-04-24 20:56:53.046195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:17729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.409 [2024-04-24 20:56:53.046215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:28.669 [2024-04-24 20:56:53.056549] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e12d8 00:25:28.669 [2024-04-24 20:56:53.058413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.669 [2024-04-24 20:56:53.058432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.066658] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e99d8 00:25:28.670 [2024-04-24 20:56:53.067569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.067593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.080431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e0a68 00:25:28.670 [2024-04-24 20:56:53.082170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.082189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.091694] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f2948 00:25:28.670 [2024-04-24 20:56:53.093253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.093273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.103524] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190f81e0 00:25:28.670 [2024-04-24 20:56:53.104580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.104600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.115773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190edd58 00:25:28.670 [2024-04-24 20:56:53.117167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.117187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.127389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.127656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.127676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.139761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.139931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.139950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.152143] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.152447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.152466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.164509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.164798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.164817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.176880] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.177185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.177204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.189243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.189546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.189566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.201697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.201990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.202009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.214085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.214388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.214407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.226440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.226747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.226766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.238805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.239080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.239099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.251159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.251470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.251490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.263527] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.263815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.263834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.275898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.276202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.276222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.288269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.288575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:22873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.288601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.670 [2024-04-24 20:56:53.300641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.670 [2024-04-24 20:56:53.300938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.670 [2024-04-24 20:56:53.300959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.313027] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.313333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.313353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.325401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.325670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.325690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.337805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.338109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.338129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.350183] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.350354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.350373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.362553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.362864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.362884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.374944] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.375244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.375263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.387302] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.387607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 
20:56:53.387630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.399676] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.399982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.400002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.412076] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.412377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.412396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.424460] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.424629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:2797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.424647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.436845] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.437136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.437155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.449220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.449528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.449547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.461585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.461856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.461875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.473948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.474251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:28.931 [2024-04-24 20:56:53.474271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.486351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.486653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.486672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.498704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.499007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.499027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.511120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.511423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.511443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.523470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.523739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.523758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.535843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.536116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:8494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.536135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.548187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.548360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:23763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.548379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.931 [2024-04-24 20:56:53.560595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:28.931 [2024-04-24 20:56:53.560882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13915 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:28.931 [2024-04-24 20:56:53.560901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.572970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.573239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:22958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.573258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.585417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.585722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.585745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.597785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.598161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.598180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.610154] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.610417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.610437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.622515] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.622828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.622848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.634878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.635183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.635202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.647230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.647498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25386 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.647516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.659572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.659836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.659854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.671940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.672206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.672226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.684304] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.684580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.684599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.696679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.696975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.696995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.709059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.709363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.709385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.721421] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.721728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.721748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.733795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.734096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:82 nsid:1 lba:7324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.734115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.746176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.746487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.746506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.192 [2024-04-24 20:56:53.758544] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.192 [2024-04-24 20:56:53.758833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.192 [2024-04-24 20:56:53.758854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.193 [2024-04-24 20:56:53.770891] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.193 [2024-04-24 20:56:53.771162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.193 [2024-04-24 20:56:53.771182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.193 [2024-04-24 20:56:53.783282] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.193 [2024-04-24 20:56:53.783592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.193 [2024-04-24 20:56:53.783612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.193 [2024-04-24 20:56:53.795684] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.193 [2024-04-24 20:56:53.795996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.193 [2024-04-24 20:56:53.796015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.193 [2024-04-24 20:56:53.808072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.193 [2024-04-24 20:56:53.808340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.193 [2024-04-24 20:56:53.808359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.193 [2024-04-24 20:56:53.820441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.193 [2024-04-24 20:56:53.820715] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.193 [2024-04-24 20:56:53.820739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.461 [2024-04-24 20:56:53.832825] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.461 [2024-04-24 20:56:53.833095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.461 [2024-04-24 20:56:53.833115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.461 [2024-04-24 20:56:53.845208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.461 [2024-04-24 20:56:53.845513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.461 [2024-04-24 20:56:53.845532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.461 [2024-04-24 20:56:53.857582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.461 [2024-04-24 20:56:53.857892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.461 [2024-04-24 20:56:53.857911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.461 [2024-04-24 20:56:53.869954] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.461 [2024-04-24 20:56:53.870223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.461 [2024-04-24 20:56:53.870242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.461 [2024-04-24 20:56:53.882333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.882622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.882641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.894698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.894979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.894997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.907086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.907387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.907405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.919443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.919747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.919768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.931810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.932104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.932124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.944189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.944494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.944513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.956568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.956860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.956880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.968952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.969267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.969286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.981327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:53.981615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.981634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:53.993703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 
20:56:53.994014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:53.994034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.006072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:54.006364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.006384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.018485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:54.018767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.018787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.030843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:54.031143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.031166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.043247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:54.043418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:25508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.043437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.055598] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:54.055897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.055917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.067994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:54.068264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.068284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.080546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 
00:25:29.462 [2024-04-24 20:56:54.080831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.080851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.462 [2024-04-24 20:56:54.092950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.462 [2024-04-24 20:56:54.093256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.462 [2024-04-24 20:56:54.093276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.105328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.105603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.105622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.117723] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.118009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.118028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.130091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.130362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.130381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.142456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.142748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.142767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.154833] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.155102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.155121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.167203] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with 
pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.167468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:5772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.167488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.179576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.179843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.179862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.191947] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.192248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.192267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.204309] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.204574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.204593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.216756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.217042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.217062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.229125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.229393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.229412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.241492] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.241801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.241821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.253861] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.254157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.254177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.266222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.266519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.266538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.278588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.278860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.278880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.290961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.291256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.291276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.303327] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.303632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.303651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.315697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.760 [2024-04-24 20:56:54.316022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.760 [2024-04-24 20:56:54.316042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.760 [2024-04-24 20:56:54.328069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.761 [2024-04-24 20:56:54.328361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.761 [2024-04-24 20:56:54.328381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.761 [2024-04-24 20:56:54.340434] tcp.c:2047:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.761 [2024-04-24 20:56:54.340733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.761 [2024-04-24 20:56:54.340753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.761 [2024-04-24 20:56:54.352799] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.761 [2024-04-24 20:56:54.353106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.761 [2024-04-24 20:56:54.353127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.761 [2024-04-24 20:56:54.365157] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.761 [2024-04-24 20:56:54.365466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.761 [2024-04-24 20:56:54.365485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.761 [2024-04-24 20:56:54.377523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.761 [2024-04-24 20:56:54.377812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.761 [2024-04-24 20:56:54.377831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:29.761 [2024-04-24 20:56:54.389886] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:29.761 [2024-04-24 20:56:54.390160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:29.761 [2024-04-24 20:56:54.390179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.022 [2024-04-24 20:56:54.402265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.022 [2024-04-24 20:56:54.402535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.022 [2024-04-24 20:56:54.402554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.022 [2024-04-24 20:56:54.414644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.022 [2024-04-24 20:56:54.414927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.022 [2024-04-24 20:56:54.414947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.022 [2024-04-24 20:56:54.427024] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.022 [2024-04-24 20:56:54.427299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.022 [2024-04-24 20:56:54.427318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.022 [2024-04-24 20:56:54.439394] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.022 [2024-04-24 20:56:54.439661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.022 [2024-04-24 20:56:54.439679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.022 [2024-04-24 20:56:54.451768] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.022 [2024-04-24 20:56:54.452058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.022 [2024-04-24 20:56:54.452077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.022 [2024-04-24 20:56:54.464127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.022 [2024-04-24 20:56:54.464394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.022 [2024-04-24 20:56:54.464413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.022 [2024-04-24 20:56:54.476490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.022 [2024-04-24 20:56:54.476800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.476820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.488850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.489158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.489177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.501226] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.501499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.501518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 
[2024-04-24 20:56:54.513596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.513875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.513894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.525958] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.526262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.526281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.538330] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.538597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.538622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.550703] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.550991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.551012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.563053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.563349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.563369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.575442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.575746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.575765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.587791] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.588064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:10229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.588084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 
m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.600182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.600482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.600502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.612637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.612962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.612981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.625029] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.625299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.625318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.637368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.637664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.637683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.649759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.650070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.650089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.023 [2024-04-24 20:56:54.662099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.023 [2024-04-24 20:56:54.662368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:8778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.023 [2024-04-24 20:56:54.662387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.674529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.674799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.674825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.686897] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.687169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.687189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.699264] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.699530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.699548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.711648] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.711924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.711944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.724024] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.724292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.724311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.736398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.736701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.736721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.748765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.749062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:5128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.749081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.761117] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.761396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.761415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.773481] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.773745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.773766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.785844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.786115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.786137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.284 [2024-04-24 20:56:54.798202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.284 [2024-04-24 20:56:54.798501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.284 [2024-04-24 20:56:54.798520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.285 [2024-04-24 20:56:54.810574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.285 [2024-04-24 20:56:54.810880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.285 [2024-04-24 20:56:54.810900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.285 [2024-04-24 20:56:54.822955] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.285 [2024-04-24 20:56:54.823246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.285 [2024-04-24 20:56:54.823266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.285 [2024-04-24 20:56:54.835323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.285 [2024-04-24 20:56:54.835593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.285 [2024-04-24 20:56:54.835612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:30.285 [2024-04-24 20:56:54.847689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8 00:25:30.285 [2024-04-24 20:56:54.847991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.285 [2024-04-24 20:56:54.848011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:30.285 [2024-04-24 20:56:54.860054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8
00:25:30.285 [2024-04-24 20:56:54.860322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:5068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.285 [2024-04-24 20:56:54.860341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:30.285 [2024-04-24 20:56:54.872430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6b70) with pdu=0x2000190e4de8
00:25:30.285 [2024-04-24 20:56:54.872708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:30.285 [2024-04-24 20:56:54.872732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:25:30.285
00:25:30.285 Latency(us)
00:25:30.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:30.285 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:30.285 nvme0n1 : 2.01 20662.92 80.71 0.00 0.00 6181.45 2921.81 14854.83
00:25:30.285 ===================================================================================================================
00:25:30.285 Total : 20662.92 80.71 0.00 0.00 6181.45 2921.81 14854.83
00:25:30.285 0
00:25:30.285 20:56:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:30.285 20:56:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:30.285 20:56:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:30.285 20:56:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:30.285 | .driver_specific
00:25:30.285 | .nvme_error
00:25:30.285 | .status_code
00:25:30.285 | .command_transient_transport_error'
00:25:30.545 20:56:55 -- host/digest.sh@71 -- # (( 162 > 0 ))
00:25:30.545 20:56:55 -- host/digest.sh@73 -- # killprocess 2921638
00:25:30.545 20:56:55 -- common/autotest_common.sh@936 -- # '[' -z 2921638 ']'
00:25:30.545 20:56:55 -- common/autotest_common.sh@940 -- # kill -0 2921638
00:25:30.545 20:56:55 -- common/autotest_common.sh@941 -- # uname
00:25:30.545 20:56:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:25:30.545 20:56:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2921638
00:25:30.545 20:56:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:25:30.545 20:56:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:25:30.545 20:56:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2921638'
00:25:30.545 killing process with pid 2921638
00:25:30.545 20:56:55 -- common/autotest_common.sh@955 -- # kill 2921638
00:25:30.545 Received shutdown signal, test time was about 2.000000 seconds
00:25:30.545
00:25:30.545 Latency(us)
00:25:30.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:30.545 ===================================================================================================================
00:25:30.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:30.545 20:56:55 -- common/autotest_common.sh@960 -- # wait 2921638
00:25:30.805 20:56:55 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:30.806 20:56:55 -- host/digest.sh@54 -- # local rw bs qd
00:25:30.806 20:56:55 -- host/digest.sh@56 -- # rw=randwrite
00:25:30.806 20:56:55 -- host/digest.sh@56 -- # bs=131072
00:25:30.806 20:56:55 -- host/digest.sh@56 -- # qd=16
00:25:30.806 20:56:55 -- host/digest.sh@58 -- # bperfpid=2922304
00:25:30.806 20:56:55 -- host/digest.sh@60 -- # waitforlisten 2922304 /var/tmp/bperf.sock
00:25:30.806 20:56:55 -- common/autotest_common.sh@817 -- # '[' -z 2922304 ']'
00:25:30.806 20:56:55 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:30.806 20:56:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:30.806 20:56:55 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:30.806 20:56:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:30.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:25:30.806 20:56:55 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:30.806 20:56:55 -- common/autotest_common.sh@10 -- # set +x
00:25:30.806 [2024-04-24 20:56:55.340051] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization...
00:25:30.806 [2024-04-24 20:56:55.340108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2922304 ]
00:25:30.806 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:30.806 Zero copy mechanism will not be used.
00:25:30.806 EAL: No free 2048 kB hugepages reported on node 1
00:25:30.806 [2024-04-24 20:56:55.396474] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:31.066 [2024-04-24 20:56:55.458284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:25:31.066 20:56:55 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:25:31.066 20:56:55 -- common/autotest_common.sh@850 -- # return 0
00:25:31.066 20:56:55 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:31.066 20:56:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:25:31.326 20:56:55 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:25:31.326 20:56:55 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:31.326 20:56:55 -- common/autotest_common.sh@10 -- # set +x
00:25:31.326 20:56:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:31.326 20:56:55 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:31.326 20:56:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:25:31.585 nvme0n1
00:25:31.585 20:56:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:25:31.585 20:56:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:25:31.585 20:56:56 -- common/autotest_common.sh@10 -- # set +x
00:25:31.585 20:56:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:25:31.585 20:56:56 -- host/digest.sh@69 -- # bperf_py perform_tests
00:25:31.585 20:56:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:25:31.585 I/O size of 131072 is greater than zero copy threshold (65536).
00:25:31.585 Zero copy mechanism will not be used.
00:25:31.585 Running I/O for 2 seconds...
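For readers following the trace, the run_bperf_err setup above reduces to roughly the following commands. The binaries, sockets and flags are copied from the xtrace output itself; the split between rpc_cmd (assumed here to talk to the NVMe-oF target's default RPC socket) and bperf_rpc (which the trace expands to rpc.py -s /var/tmp/bperf.sock) is an assumption about how the digest.sh helpers are wired, not something the log states directly.

  # start bdevperf with its own RPC socket; 128 KiB randwrite, queue depth 16, 2-second run (flags as traced)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &

  # enable per-command NVMe error counters and unlimited bdev retries on the initiator side
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

  # attach the target namespace over TCP with data digest (--ddgst) enabled
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ask the accel layer (assumed target-side, via the default RPC socket) to corrupt crc32c results,
  # so data-digest errors show up on the wire; arguments copied verbatim from the trace
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      accel_error_inject_error -o crc32c -t corrupt -i 32

  # kick off the workload against the attached bdev
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests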
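Once a run finishes, the pass check recorded earlier (the "(( 162 > 0 ))" line) is just the transient-transport-error counter read back from bdev_get_iostat. A minimal equivalent of get_transient_errcount, using the same jq path the trace shows, would be:

  # count of WRITEs that completed as TRANSIENT TRANSPORT ERROR during the run
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the test expects at least one injected digest error to surface as a transient transport error
  (( errcount > 0 ))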
00:25:31.847 [2024-04-24 20:56:56.228938] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.847 [2024-04-24 20:56:56.229320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-04-24 20:56:56.229353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.847 [2024-04-24 20:56:56.239820] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.847 [2024-04-24 20:56:56.240173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.847 [2024-04-24 20:56:56.240199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.847 [2024-04-24 20:56:56.248771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.847 [2024-04-24 20:56:56.249169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.249192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.258569] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.258934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.258956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.268943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.269276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.269298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.278064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.278330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.278355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.287595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.287960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.287982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.296501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.296892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.296914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.304906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.305162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.305183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.311189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.311424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.311445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.315983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.316295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.316317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.320276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.320611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.320632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.325125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.325346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.325366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.331145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.331442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.331463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.337022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.337375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.337398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.343149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.343485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.343507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.348542] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.348775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.348796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.356559] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.356875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.356896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.363105] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.363389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.363411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.371021] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.371340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.371361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.379671] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.379904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.379924] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.386941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.387320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.387340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.394499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.394874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.394895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.402001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.402313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.402334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.408806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.409154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.409175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.415369] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.415739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.415760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.421903] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.422259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.422280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.429496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.429843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 
[2024-04-24 20:56:56.429865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.434907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.435159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.435180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.848 [2024-04-24 20:56:56.443697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.848 [2024-04-24 20:56:56.443929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.848 [2024-04-24 20:56:56.443949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.849 [2024-04-24 20:56:56.448025] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.849 [2024-04-24 20:56:56.448249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.849 [2024-04-24 20:56:56.448269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.849 [2024-04-24 20:56:56.452921] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.849 [2024-04-24 20:56:56.453206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.849 [2024-04-24 20:56:56.453231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.849 [2024-04-24 20:56:56.458048] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.849 [2024-04-24 20:56:56.458313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.849 [2024-04-24 20:56:56.458333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:31.849 [2024-04-24 20:56:56.465190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.849 [2024-04-24 20:56:56.465410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.849 [2024-04-24 20:56:56.465431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:31.849 [2024-04-24 20:56:56.471425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.849 [2024-04-24 20:56:56.471648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.849 [2024-04-24 20:56:56.471668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:31.849 [2024-04-24 20:56:56.480307] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.849 [2024-04-24 20:56:56.480656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.849 [2024-04-24 20:56:56.480678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:31.849 [2024-04-24 20:56:56.485814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:31.849 [2024-04-24 20:56:56.486036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.849 [2024-04-24 20:56:56.486056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.110 [2024-04-24 20:56:56.490994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.110 [2024-04-24 20:56:56.491216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.110 [2024-04-24 20:56:56.491236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.110 [2024-04-24 20:56:56.497011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.110 [2024-04-24 20:56:56.497379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.110 [2024-04-24 20:56:56.497399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.110 [2024-04-24 20:56:56.501830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.110 [2024-04-24 20:56:56.502053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.110 [2024-04-24 20:56:56.502072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.110 [2024-04-24 20:56:56.508087] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.110 [2024-04-24 20:56:56.508376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.508397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.515067] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.515399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.515420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.521123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.521443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.521464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.527078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.527418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.527439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.533247] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.533484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.533505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.539997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.540313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.540334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.545430] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.545653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.545674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.549905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.550130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.550150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.554435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.554655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.554675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.558873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.559141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.559161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.565623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.565850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.565871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.570572] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.570799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.570819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.574612] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.574842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.574862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.578536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.578765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.578785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.582800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.583019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.583040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.588487] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 
[2024-04-24 20:56:56.588853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.588875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.594361] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.594701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.594722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.600166] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.600482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.600507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.606387] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.606610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.606631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.613485] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.613744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.613766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.620356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.620691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.620713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.625584] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.625815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.625835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.630698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.630929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.630950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.636031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.636372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.636393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.641854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.642075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.642095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.648997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.649228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.649248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.654906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.655244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.111 [2024-04-24 20:56:56.655266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.111 [2024-04-24 20:56:56.659158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.111 [2024-04-24 20:56:56.659374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.659394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.663031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.663247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.663266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.667525] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.667868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.667889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.671888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.672101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.672122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.677364] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.677576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.677595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.681525] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.681741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.681761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.685344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.685556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.685577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.689711] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.689931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.689951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.693657] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.693874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.693894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:32.112 [2024-04-24 20:56:56.697378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.697587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.697607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.701095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.701306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.701326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.707119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.707329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.707349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.713707] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.714002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.714023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.718513] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.718853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.718873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.722404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.722615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.722635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.727450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.727669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.727689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.732985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.733196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.733219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.737393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.737602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.737621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.112 [2024-04-24 20:56:56.744532] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.112 [2024-04-24 20:56:56.744985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.112 [2024-04-24 20:56:56.745007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.751574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.751920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.751941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.757078] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.757295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.757314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.763704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.764026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.764047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.772926] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.773330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.773350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.780373] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.780753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.780774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.788017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.788340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.788361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.795278] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.795487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.795507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.799457] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.799672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.799692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.803401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.803613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.803633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.807582] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.807802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.807822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.811973] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.812181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.812201] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.820164] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.820542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.820563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.827771] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.828022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.373 [2024-04-24 20:56:56.828043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.373 [2024-04-24 20:56:56.835188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.373 [2024-04-24 20:56:56.835395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.835415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.844313] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.844710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.844740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.853186] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.853436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.853457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.861446] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.861795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.861816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.870435] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.870781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:32.374 [2024-04-24 20:56:56.870802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.878136] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.878454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.878475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.886910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.887257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.887277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.897448] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.897778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.897798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.907824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.908171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.908192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.918031] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.918337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.918357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.927900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.928304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.928325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.938178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.938451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.938471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.947472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.947829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.947850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.957015] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.957375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.957396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.965098] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.965430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.965451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.970837] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.971071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.971091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.975889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.976248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.976268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.980003] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.980219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.980239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.983929] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.984141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.984161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.990453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.990772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.990792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:56.996738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:56.997094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:56.997115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:57.000895] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:57.001104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:57.001125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.374 [2024-04-24 20:56:57.006961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.374 [2024-04-24 20:56:57.007173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.374 [2024-04-24 20:56:57.007193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.013150] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.013456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.013477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.017526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.017746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.017766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.021847] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.022058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.022078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.027333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.027557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.027577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.033410] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.033787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.033818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.037652] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.037993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.038015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.042270] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.042676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.042696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.048055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.048429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.048451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.054564] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.054895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.054916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.061589] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 
[2024-04-24 20:56:57.061934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.061955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.068262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.068594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.068615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.073414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.073763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.073784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.078168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.078384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.078404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.083888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.084220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.084241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.092906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.093234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.093255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.099376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.099746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.099767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.105103] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.105426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.105449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.111190] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.111548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.111569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.116976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.117190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.117210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.123266] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.123481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.123501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.130382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.130706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.130734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.136824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.137218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.137239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.143045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.143262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.636 [2024-04-24 20:56:57.143282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.636 [2024-04-24 20:56:57.148641] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.636 [2024-04-24 20:56:57.148994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.149015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.154905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.155239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.155260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.163045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.163371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.163391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.168908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.169232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.169254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.173112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.173327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.173347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.177957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.178169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.178189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.183761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.183971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.183990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:32.637 [2024-04-24 20:56:57.189948] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.190173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.190197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.197433] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.197839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.197860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.204541] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.204777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.204797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.211125] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.211430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.211451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.217081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.217428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.217449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.222805] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.223142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.223163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.227020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.227369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.227390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.234174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.234481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.234500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.240557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.240882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.240903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.247010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.247224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.247244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.251497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.251762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.251783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.257624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.257936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.257957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.265390] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.265692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.265713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.637 [2024-04-24 20:56:57.272959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.637 [2024-04-24 20:56:57.273467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.637 [2024-04-24 20:56:57.273489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.898 [2024-04-24 20:56:57.280272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.898 [2024-04-24 20:56:57.280609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.898 [2024-04-24 20:56:57.280630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.898 [2024-04-24 20:56:57.288328] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.898 [2024-04-24 20:56:57.288692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.898 [2024-04-24 20:56:57.288713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.898 [2024-04-24 20:56:57.296459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.898 [2024-04-24 20:56:57.296809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.898 [2024-04-24 20:56:57.296830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.898 [2024-04-24 20:56:57.303402] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.898 [2024-04-24 20:56:57.303710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.898 [2024-04-24 20:56:57.303740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.898 [2024-04-24 20:56:57.311283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.898 [2024-04-24 20:56:57.311494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.311514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.317779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.318009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.318029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.323204] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.323558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.323579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.328089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.328304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.328323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.332347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.332563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.332583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.337216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.337573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.337594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.343199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.343517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.343538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.348899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.349129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.349151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.357054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.357148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.357168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.364155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.364524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 
[2024-04-24 20:56:57.364545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.369376] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.369590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.369610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.374072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.374429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.374450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.378810] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.379027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.379047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.383220] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.383434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.383453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.387790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.388182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.388203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.393213] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.393531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.393552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.398365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.398628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.398650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.404367] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.404591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.404611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.412495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.412831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.412852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.422028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.422349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.422370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.430417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.430759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.430779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.438900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.439121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.439141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.446506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.446873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.446894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.454355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.454722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.454749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.462900] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.463263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.463284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.469918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.470218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.470243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.478453] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.478662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.478683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.485741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.486180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.486202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.494323] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.494667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.494687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.503037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.503337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.503358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.510965] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.511193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.511212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.518857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.519195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.519216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.527488] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.527914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.527936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:32.899 [2024-04-24 20:56:57.537699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:32.899 [2024-04-24 20:56:57.537966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:32.899 [2024-04-24 20:56:57.537987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.548628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.549019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.549040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.559624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.560145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.560166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.569006] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.569413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.569434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.577499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 
[2024-04-24 20:56:57.577926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.577947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.586762] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.587172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.587194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.598130] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.598504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.598524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.608843] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.609186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.609208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.620442] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.620773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.620794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.629472] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.629786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.629806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.640439] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.640809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.640830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.648838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.160 [2024-04-24 20:56:57.649055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.160 [2024-04-24 20:56:57.649075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.160 [2024-04-24 20:56:57.654689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.655071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.655091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.659678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.660008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.660029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.665625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.666028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.666050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.672451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.672659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.672679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.679265] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.679587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.679608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.690550] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.690959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.690980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.700555] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.700996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.701020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.711281] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.711723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.711749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.721759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.722131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.722153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.729576] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.729965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.729987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.739372] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.739673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.739694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.747908] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.748249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.748270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.756858] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.757084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.757105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:25:33.161 [2024-04-24 20:56:57.764013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.764339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.764360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.774268] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.774629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.774650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.781868] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.782253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.782274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.788407] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.788618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.788638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.794464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.794800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.161 [2024-04-24 20:56:57.794821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.161 [2024-04-24 20:56:57.799838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.161 [2024-04-24 20:56:57.800200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.800221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.805194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.805433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.805453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.811099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.811426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.811447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.816484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.816695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.816715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.822431] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.822647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.822667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.829883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.830227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.830249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.838344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.838560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.838580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.845089] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.845437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.845458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.850889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.851106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.851126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.856537] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.856870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.856891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.863045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.863384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.863405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.869187] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.869515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.869536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.875178] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.875405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.875425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.880540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.880901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.880924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.886785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.887160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.887184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.894793] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.895019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.895039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.902013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.902226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.902246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.906854] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.907067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.907087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.914119] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.914350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.914370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.423 [2024-04-24 20:56:57.920115] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.423 [2024-04-24 20:56:57.920452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.423 [2024-04-24 20:56:57.920472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.928851] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.929228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.929248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.935502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.935820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.935840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.941778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.942059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 
[2024-04-24 20:56:57.942080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.947892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.948266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.948287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.952838] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.953156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.953177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.959975] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.960339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.960360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.968618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.968885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.968906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.978146] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.978389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.978410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.986781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.987063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.987084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:57.995185] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:57.995574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:57.995594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.003906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.004255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.004275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.010674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.011020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.011047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.015179] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.015515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.015536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.019599] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.019981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.020002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.024391] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.024764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.024785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.029181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.029394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.029413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.036850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.037061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.037081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.045753] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.046027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.046048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.424 [2024-04-24 20:56:58.054738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.424 [2024-04-24 20:56:58.055020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.424 [2024-04-24 20:56:58.055041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.684 [2024-04-24 20:56:58.065649] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.684 [2024-04-24 20:56:58.066084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.684 [2024-04-24 20:56:58.066104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.684 [2024-04-24 20:56:58.076097] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.684 [2024-04-24 20:56:58.076442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.684 [2024-04-24 20:56:58.076462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.684 [2024-04-24 20:56:58.086614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.684 [2024-04-24 20:56:58.086945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.684 [2024-04-24 20:56:58.086966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.684 [2024-04-24 20:56:58.096984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.684 [2024-04-24 20:56:58.097266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.684 [2024-04-24 20:56:58.097287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.684 [2024-04-24 20:56:58.108297] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.684 [2024-04-24 20:56:58.108642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.108662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.118997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.119318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.119339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.126555] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.126774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.126794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.134491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.134847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.134869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.143491] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.143883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.143905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.151558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.151874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.151895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.159787] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.160158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.160179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.167936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 
[2024-04-24 20:56:58.168250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.168271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.176595] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.176932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.176953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.185443] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.185717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.185744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.194176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.194507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.194528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.201961] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.202346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.202367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.209227] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.209442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.209462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.214017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.214233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.214252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:33.685 [2024-04-24 20:56:58.218512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ca6eb0) with pdu=0x2000190fef90 00:25:33.685 [2024-04-24 20:56:58.218732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.685 [2024-04-24 20:56:58.218756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:33.685 00:25:33.685 Latency(us) 00:25:33.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.685 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:33.685 nvme0n1 : 2.00 4512.83 564.10 0.00 0.00 3538.68 1720.32 11578.03 00:25:33.685 =================================================================================================================== 00:25:33.685 Total : 4512.83 564.10 0.00 0.00 3538.68 1720.32 11578.03 00:25:33.685 0 00:25:33.685 20:56:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:33.685 20:56:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:33.685 20:56:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:33.685 | .driver_specific 00:25:33.685 | .nvme_error 00:25:33.685 | .status_code 00:25:33.685 | .command_transient_transport_error' 00:25:33.685 20:56:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:33.946 20:56:58 -- host/digest.sh@71 -- # (( 291 > 0 )) 00:25:33.946 20:56:58 -- host/digest.sh@73 -- # killprocess 2922304 00:25:33.946 20:56:58 -- common/autotest_common.sh@936 -- # '[' -z 2922304 ']' 00:25:33.946 20:56:58 -- common/autotest_common.sh@940 -- # kill -0 2922304 00:25:33.946 20:56:58 -- common/autotest_common.sh@941 -- # uname 00:25:33.946 20:56:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:33.946 20:56:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2922304 00:25:33.946 20:56:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:33.946 20:56:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:33.946 20:56:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2922304' 00:25:33.946 killing process with pid 2922304 00:25:33.946 20:56:58 -- common/autotest_common.sh@955 -- # kill 2922304 00:25:33.946 Received shutdown signal, test time was about 2.000000 seconds 00:25:33.946 00:25:33.946 Latency(us) 00:25:33.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.946 =================================================================================================================== 00:25:33.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.946 20:56:58 -- common/autotest_common.sh@960 -- # wait 2922304 00:25:34.205 20:56:58 -- host/digest.sh@116 -- # killprocess 2920232 00:25:34.205 20:56:58 -- common/autotest_common.sh@936 -- # '[' -z 2920232 ']' 00:25:34.205 20:56:58 -- common/autotest_common.sh@940 -- # kill -0 2920232 00:25:34.205 20:56:58 -- common/autotest_common.sh@941 -- # uname 00:25:34.205 20:56:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:34.205 20:56:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2920232 00:25:34.206 20:56:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:34.206 20:56:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:34.206 20:56:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2920232' 
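For readers following the trace above: the pass/fail decision of this digest-error test comes down to one RPC call plus a jq filter, as exercised by the get_transient_errcount helper in host/digest.sh. Below is a minimal, hand-written sketch of that check, not a copy of the harness itself; the rpc.py path, the /var/tmp/bperf.sock socket and the nvme0n1 bdev name are simply the values visible in this run, and the > 0 condition mirrors the (( 291 > 0 )) check in the trace.

    #!/usr/bin/env bash
    # Query per-bdev I/O statistics over the bdevperf RPC socket and extract the
    # count of commands that completed with a transient transport error. The
    # digest test forces data-digest (CRC32C) failures, so this should be non-zero.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    # The test passes only if at least one transient transport error was recorded.
    (( errcount > 0 ))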
00:25:34.206 killing process with pid 2920232 00:25:34.206 20:56:58 -- common/autotest_common.sh@955 -- # kill 2920232 00:25:34.206 20:56:58 -- common/autotest_common.sh@960 -- # wait 2920232 00:25:34.206 00:25:34.206 real 0m14.686s 00:25:34.206 user 0m29.086s 00:25:34.206 sys 0m3.366s 00:25:34.206 20:56:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:34.206 20:56:58 -- common/autotest_common.sh@10 -- # set +x 00:25:34.206 ************************************ 00:25:34.206 END TEST nvmf_digest_error 00:25:34.206 ************************************ 00:25:34.465 20:56:58 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:34.466 20:56:58 -- host/digest.sh@150 -- # nvmftestfini 00:25:34.466 20:56:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:34.466 20:56:58 -- nvmf/common.sh@117 -- # sync 00:25:34.466 20:56:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:34.466 20:56:58 -- nvmf/common.sh@120 -- # set +e 00:25:34.466 20:56:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:34.466 20:56:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:34.466 rmmod nvme_tcp 00:25:34.466 rmmod nvme_fabrics 00:25:34.466 rmmod nvme_keyring 00:25:34.466 20:56:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:34.466 20:56:58 -- nvmf/common.sh@124 -- # set -e 00:25:34.466 20:56:58 -- nvmf/common.sh@125 -- # return 0 00:25:34.466 20:56:58 -- nvmf/common.sh@478 -- # '[' -n 2920232 ']' 00:25:34.466 20:56:58 -- nvmf/common.sh@479 -- # killprocess 2920232 00:25:34.466 20:56:58 -- common/autotest_common.sh@936 -- # '[' -z 2920232 ']' 00:25:34.466 20:56:58 -- common/autotest_common.sh@940 -- # kill -0 2920232 00:25:34.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2920232) - No such process 00:25:34.466 20:56:58 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2920232 is not found' 00:25:34.466 Process with pid 2920232 is not found 00:25:34.466 20:56:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:34.466 20:56:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:34.466 20:56:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:34.466 20:56:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.466 20:56:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:34.466 20:56:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.466 20:56:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.466 20:56:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.378 20:57:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:36.378 00:25:36.378 real 0m38.948s 00:25:36.378 user 0m59.695s 00:25:36.378 sys 0m12.309s 00:25:36.378 20:57:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:36.378 20:57:00 -- common/autotest_common.sh@10 -- # set +x 00:25:36.378 ************************************ 00:25:36.378 END TEST nvmf_digest 00:25:36.378 ************************************ 00:25:36.638 20:57:01 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:25:36.638 20:57:01 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]] 00:25:36.638 20:57:01 -- nvmf/nvmf.sh@118 -- # [[ phy == phy ]] 00:25:36.638 20:57:01 -- nvmf/nvmf.sh@119 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:36.638 20:57:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:36.638 20:57:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:36.638 20:57:01 -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.638 ************************************ 00:25:36.638 START TEST nvmf_bdevperf 00:25:36.638 ************************************ 00:25:36.638 20:57:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:36.899 * Looking for test storage... 00:25:36.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.899 20:57:01 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.899 20:57:01 -- nvmf/common.sh@7 -- # uname -s 00:25:36.899 20:57:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.899 20:57:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.899 20:57:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.899 20:57:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.899 20:57:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.899 20:57:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.899 20:57:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.899 20:57:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.899 20:57:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.899 20:57:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.899 20:57:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:36.899 20:57:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:25:36.899 20:57:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.899 20:57:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.899 20:57:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.899 20:57:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.899 20:57:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.899 20:57:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.899 20:57:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.899 20:57:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.899 20:57:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.899 20:57:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.899 20:57:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.899 20:57:01 -- paths/export.sh@5 -- # export PATH 00:25:36.899 20:57:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.899 20:57:01 -- nvmf/common.sh@47 -- # : 0 00:25:36.899 20:57:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:36.899 20:57:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:36.899 20:57:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:36.899 20:57:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.899 20:57:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.899 20:57:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:36.899 20:57:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:36.899 20:57:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:36.899 20:57:01 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:36.899 20:57:01 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:36.899 20:57:01 -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:36.899 20:57:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:36.899 20:57:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.899 20:57:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:25:36.899 20:57:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:36.899 20:57:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:36.899 20:57:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.899 20:57:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.899 20:57:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.899 20:57:01 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:25:36.899 20:57:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:25:36.899 20:57:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:25:36.899 20:57:01 -- common/autotest_common.sh@10 -- # set +x 00:25:45.035 20:57:08 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:45.035 20:57:08 -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.035 20:57:08 -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.035 20:57:08 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.035 20:57:08 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.035 20:57:08 -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.035 20:57:08 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.035 20:57:08 -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.035 20:57:08 -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.035 20:57:08 -- nvmf/common.sh@296 
-- # e810=() 00:25:45.035 20:57:08 -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.035 20:57:08 -- nvmf/common.sh@297 -- # x722=() 00:25:45.035 20:57:08 -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.035 20:57:08 -- nvmf/common.sh@298 -- # mlx=() 00:25:45.035 20:57:08 -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.035 20:57:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.035 20:57:08 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.035 20:57:08 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.035 20:57:08 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.035 20:57:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.035 20:57:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:45.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:45.035 20:57:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.035 20:57:08 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:45.035 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:45.035 20:57:08 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.035 20:57:08 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.036 20:57:08 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.036 20:57:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.036 20:57:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.036 20:57:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.036 20:57:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:45.036 Found 
net devices under 0000:4b:00.0: cvl_0_0 00:25:45.036 20:57:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.036 20:57:08 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.036 20:57:08 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.036 20:57:08 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:25:45.036 20:57:08 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.036 20:57:08 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:45.036 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:45.036 20:57:08 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.036 20:57:08 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:25:45.036 20:57:08 -- nvmf/common.sh@403 -- # is_hw=yes 00:25:45.036 20:57:08 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:25:45.036 20:57:08 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.036 20:57:08 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.036 20:57:08 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.036 20:57:08 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.036 20:57:08 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.036 20:57:08 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.036 20:57:08 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.036 20:57:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.036 20:57:08 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.036 20:57:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.036 20:57:08 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.036 20:57:08 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.036 20:57:08 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.036 20:57:08 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.036 20:57:08 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.036 20:57:08 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.036 20:57:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.036 20:57:08 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.036 20:57:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.036 20:57:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:25:45.036 00:25:45.036 --- 10.0.0.2 ping statistics --- 00:25:45.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.036 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:25:45.036 20:57:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:45.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:25:45.036 00:25:45.036 --- 10.0.0.1 ping statistics --- 00:25:45.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.036 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:45.036 20:57:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.036 20:57:08 -- nvmf/common.sh@411 -- # return 0 00:25:45.036 20:57:08 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:45.036 20:57:08 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.036 20:57:08 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:45.036 20:57:08 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.036 20:57:08 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:45.036 20:57:08 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:45.036 20:57:08 -- host/bdevperf.sh@25 -- # tgt_init 00:25:45.036 20:57:08 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:45.036 20:57:08 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:45.036 20:57:08 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:45.036 20:57:08 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 20:57:08 -- nvmf/common.sh@470 -- # nvmfpid=2927289 00:25:45.036 20:57:08 -- nvmf/common.sh@471 -- # waitforlisten 2927289 00:25:45.036 20:57:08 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:45.036 20:57:08 -- common/autotest_common.sh@817 -- # '[' -z 2927289 ']' 00:25:45.036 20:57:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.036 20:57:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:45.036 20:57:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.036 20:57:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:45.036 20:57:08 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 [2024-04-24 20:57:08.702612] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:45.036 [2024-04-24 20:57:08.702678] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.036 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.036 [2024-04-24 20:57:08.772516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:45.036 [2024-04-24 20:57:08.846765] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.036 [2024-04-24 20:57:08.846804] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.036 [2024-04-24 20:57:08.846813] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.036 [2024-04-24 20:57:08.846819] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.036 [2024-04-24 20:57:08.846825] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
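For reference, the nvmf_tcp_init plumbing traced above reduces to roughly the following commands; the interface names (cvl_0_0 / cvl_0_1), the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are taken from the trace itself, and the block is a condensed sketch of what common.sh does rather than a verbatim excerpt:

    # target-side port moves into its own namespace, initiator-side port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1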
00:25:45.036 [2024-04-24 20:57:08.846975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.036 [2024-04-24 20:57:08.847104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.036 [2024-04-24 20:57:08.847105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:45.036 20:57:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:45.036 20:57:09 -- common/autotest_common.sh@850 -- # return 0 00:25:45.036 20:57:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:45.036 20:57:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:45.036 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 20:57:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:45.036 20:57:09 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.036 20:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.036 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 [2024-04-24 20:57:09.574920] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.036 20:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.036 20:57:09 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.036 20:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.036 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 Malloc0 00:25:45.036 20:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.036 20:57:09 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.036 20:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.036 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 20:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.036 20:57:09 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:45.036 20:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.036 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 20:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.036 20:57:09 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.036 20:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:45.036 20:57:09 -- common/autotest_common.sh@10 -- # set +x 00:25:45.036 [2024-04-24 20:57:09.640093] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.036 20:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:45.036 20:57:09 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:45.036 20:57:09 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:45.036 20:57:09 -- nvmf/common.sh@521 -- # config=() 00:25:45.036 20:57:09 -- nvmf/common.sh@521 -- # local subsystem config 00:25:45.036 20:57:09 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:45.036 20:57:09 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:45.036 { 00:25:45.036 "params": { 00:25:45.036 "name": "Nvme$subsystem", 00:25:45.036 "trtype": "$TEST_TRANSPORT", 00:25:45.036 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:45.036 "adrfam": "ipv4", 00:25:45.036 "trsvcid": "$NVMF_PORT", 00:25:45.036 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:45.036 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:45.036 "hdgst": ${hdgst:-false}, 00:25:45.036 "ddgst": ${ddgst:-false} 00:25:45.036 }, 00:25:45.036 "method": "bdev_nvme_attach_controller" 00:25:45.036 } 00:25:45.036 EOF 00:25:45.036 )") 00:25:45.036 20:57:09 -- nvmf/common.sh@543 -- # cat 00:25:45.036 20:57:09 -- nvmf/common.sh@545 -- # jq . 00:25:45.036 20:57:09 -- nvmf/common.sh@546 -- # IFS=, 00:25:45.036 20:57:09 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:45.036 "params": { 00:25:45.036 "name": "Nvme1", 00:25:45.036 "trtype": "tcp", 00:25:45.036 "traddr": "10.0.0.2", 00:25:45.036 "adrfam": "ipv4", 00:25:45.036 "trsvcid": "4420", 00:25:45.036 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:45.036 "hdgst": false, 00:25:45.037 "ddgst": false 00:25:45.037 }, 00:25:45.037 "method": "bdev_nvme_attach_controller" 00:25:45.037 }' 00:25:45.296 [2024-04-24 20:57:09.694483] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:45.296 [2024-04-24 20:57:09.694530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927692 ] 00:25:45.296 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.296 [2024-04-24 20:57:09.769784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.296 [2024-04-24 20:57:09.832057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.574 Running I/O for 1 seconds... 00:25:46.549 00:25:46.549 Latency(us) 00:25:46.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:46.549 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:46.549 Verification LBA range: start 0x0 length 0x4000 00:25:46.549 Nvme1n1 : 1.01 8910.77 34.81 0.00 0.00 14306.12 3181.23 15400.96 00:25:46.549 =================================================================================================================== 00:25:46.549 Total : 8910.77 34.81 0.00 0.00 14306.12 3181.23 15400.96 00:25:46.549 20:57:11 -- host/bdevperf.sh@30 -- # bdevperfpid=2928245 00:25:46.549 20:57:11 -- host/bdevperf.sh@32 -- # sleep 3 00:25:46.549 20:57:11 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:46.549 20:57:11 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:46.549 20:57:11 -- nvmf/common.sh@521 -- # config=() 00:25:46.549 20:57:11 -- nvmf/common.sh@521 -- # local subsystem config 00:25:46.549 20:57:11 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:25:46.549 20:57:11 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:25:46.549 { 00:25:46.549 "params": { 00:25:46.549 "name": "Nvme$subsystem", 00:25:46.549 "trtype": "$TEST_TRANSPORT", 00:25:46.549 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:46.549 "adrfam": "ipv4", 00:25:46.549 "trsvcid": "$NVMF_PORT", 00:25:46.549 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:46.549 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:46.549 "hdgst": ${hdgst:-false}, 00:25:46.549 "ddgst": ${ddgst:-false} 00:25:46.549 }, 00:25:46.549 "method": "bdev_nvme_attach_controller" 00:25:46.549 } 00:25:46.549 EOF 00:25:46.549 )") 00:25:46.549 20:57:11 -- nvmf/common.sh@543 -- # cat 00:25:46.549 20:57:11 -- nvmf/common.sh@545 -- # jq . 
00:25:46.549 20:57:11 -- nvmf/common.sh@546 -- # IFS=, 00:25:46.549 20:57:11 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:25:46.549 "params": { 00:25:46.549 "name": "Nvme1", 00:25:46.549 "trtype": "tcp", 00:25:46.549 "traddr": "10.0.0.2", 00:25:46.549 "adrfam": "ipv4", 00:25:46.549 "trsvcid": "4420", 00:25:46.549 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:46.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:46.550 "hdgst": false, 00:25:46.550 "ddgst": false 00:25:46.550 }, 00:25:46.550 "method": "bdev_nvme_attach_controller" 00:25:46.550 }' 00:25:46.809 [2024-04-24 20:57:11.173313] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:46.809 [2024-04-24 20:57:11.173387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2928245 ] 00:25:46.809 EAL: No free 2048 kB hugepages reported on node 1 00:25:46.809 [2024-04-24 20:57:11.249252] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.809 [2024-04-24 20:57:11.312125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.069 Running I/O for 15 seconds... 00:25:49.619 20:57:14 -- host/bdevperf.sh@33 -- # kill -9 2927289 00:25:49.619 20:57:14 -- host/bdevperf.sh@35 -- # sleep 3 00:25:49.619 [2024-04-24 20:57:14.131856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.131900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.131923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:56088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.131939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.131949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:56096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.131957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.131969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:56104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.131980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.131991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:56120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:56128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:56136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:56144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:56152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:56160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:56168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:56176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132237] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:56288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:56304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:56312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.619 [2024-04-24 20:57:14.132499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.619 [2024-04-24 20:57:14.132508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:56328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 
20:57:14.132784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.132988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.132995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:56616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.620 [2024-04-24 20:57:14.133137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.620 [2024-04-24 20:57:14.133143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:56632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:56648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:56656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:56672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:56680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:56688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:56704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:56712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:56736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:56744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:56768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:49.621 [2024-04-24 20:57:14.133443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:56776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:56784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:56800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:56808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:56816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:56824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:56832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:56840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:56848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 
20:57:14.133611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:56856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:56864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:56888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:56904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:56928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:56936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.621 [2024-04-24 20:57:14.133794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.621 [2024-04-24 20:57:14.133801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:56952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:56960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:56968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:56992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:57000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:57008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:57016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:57024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:57032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.133989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.133996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.134015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.134031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:57064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.134047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.134064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:57080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.134080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:49.622 [2024-04-24 20:57:14.134095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e2280 is same 
with the state(5) to be set 00:25:49.622 [2024-04-24 20:57:14.134114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:49.622 [2024-04-24 20:57:14.134120] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:49.622 [2024-04-24 20:57:14.134127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57096 len:8 PRP1 0x0 PRP2 0x0 00:25:49.622 [2024-04-24 20:57:14.134134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.622 [2024-04-24 20:57:14.134173] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24e2280 was disconnected and freed. reset controller. 00:25:49.622 [2024-04-24 20:57:14.137749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.622 [2024-04-24 20:57:14.137796] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.622 [2024-04-24 20:57:14.138601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.139065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.139102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.622 [2024-04-24 20:57:14.139113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.622 [2024-04-24 20:57:14.139350] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.622 [2024-04-24 20:57:14.139568] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.622 [2024-04-24 20:57:14.139577] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.622 [2024-04-24 20:57:14.139585] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.622 [2024-04-24 20:57:14.143063] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
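The long run of ABORTED - SQ DELETION completions above is the expected fallout of the step traced at host/bdevperf.sh@33: the nvmf_tgt process (pid 2927289) is killed with SIGKILL while bdevperf still has its 128-deep verify queue in flight, so every outstanding READ is completed manually as aborted, qpair 0x24e2280 is disconnected and freed, and bdev_nvme starts resetting the controller. Each reconnect attempt then fails with errno = 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 at this point. The fault-injection step itself is just the following, paraphrased from the trace; the target restart that eventually lets a reset succeed happens later in the script and is outside this excerpt:

    kill -9 "$nvmfpid"    # host/bdevperf.sh@33: hard-kill the running target under live I/O
    sleep 3               # host/bdevperf.sh@35: leave bdevperf retrying against a closed port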
00:25:49.622 [2024-04-24 20:57:14.151870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.622 [2024-04-24 20:57:14.152528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.152894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.152910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.622 [2024-04-24 20:57:14.152920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.622 [2024-04-24 20:57:14.153154] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.622 [2024-04-24 20:57:14.153372] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.622 [2024-04-24 20:57:14.153381] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.622 [2024-04-24 20:57:14.153389] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.622 [2024-04-24 20:57:14.156869] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.622 [2024-04-24 20:57:14.165676] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.622 [2024-04-24 20:57:14.166257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.166600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.166611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.622 [2024-04-24 20:57:14.166619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.622 [2024-04-24 20:57:14.166845] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.622 [2024-04-24 20:57:14.167061] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.622 [2024-04-24 20:57:14.167071] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.622 [2024-04-24 20:57:14.167078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.622 [2024-04-24 20:57:14.170550] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.622 [2024-04-24 20:57:14.179573] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.622 [2024-04-24 20:57:14.180206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.180548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.180563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.622 [2024-04-24 20:57:14.180573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.622 [2024-04-24 20:57:14.180818] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.622 [2024-04-24 20:57:14.181037] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.622 [2024-04-24 20:57:14.181048] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.622 [2024-04-24 20:57:14.181055] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.622 [2024-04-24 20:57:14.184537] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.622 [2024-04-24 20:57:14.193350] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.622 [2024-04-24 20:57:14.193840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.622 [2024-04-24 20:57:14.194239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.194253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.623 [2024-04-24 20:57:14.194262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.623 [2024-04-24 20:57:14.194496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.623 [2024-04-24 20:57:14.194715] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.623 [2024-04-24 20:57:14.194723] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.623 [2024-04-24 20:57:14.194738] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.623 [2024-04-24 20:57:14.198211] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.623 [2024-04-24 20:57:14.207212] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.623 [2024-04-24 20:57:14.207742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.208180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.208219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.623 [2024-04-24 20:57:14.208229] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.623 [2024-04-24 20:57:14.208463] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.623 [2024-04-24 20:57:14.208686] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.623 [2024-04-24 20:57:14.208696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.623 [2024-04-24 20:57:14.208703] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.623 [2024-04-24 20:57:14.212181] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.623 [2024-04-24 20:57:14.220979] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.623 [2024-04-24 20:57:14.221636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.221997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.222012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.623 [2024-04-24 20:57:14.222022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.623 [2024-04-24 20:57:14.222255] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.623 [2024-04-24 20:57:14.222474] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.623 [2024-04-24 20:57:14.222482] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.623 [2024-04-24 20:57:14.222490] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.623 [2024-04-24 20:57:14.225967] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.623 [2024-04-24 20:57:14.234765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.623 [2024-04-24 20:57:14.235337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.235686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.235697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.623 [2024-04-24 20:57:14.235705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.623 [2024-04-24 20:57:14.235925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.623 [2024-04-24 20:57:14.236140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.623 [2024-04-24 20:57:14.236150] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.623 [2024-04-24 20:57:14.236157] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.623 [2024-04-24 20:57:14.239622] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.623 [2024-04-24 20:57:14.248612] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.623 [2024-04-24 20:57:14.249142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.249475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.623 [2024-04-24 20:57:14.249485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.623 [2024-04-24 20:57:14.249493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.623 [2024-04-24 20:57:14.249707] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.623 [2024-04-24 20:57:14.249926] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.623 [2024-04-24 20:57:14.249941] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.623 [2024-04-24 20:57:14.249948] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.623 [2024-04-24 20:57:14.253413] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.889 [2024-04-24 20:57:14.262404] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.889 [2024-04-24 20:57:14.262920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-04-24 20:57:14.263252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-04-24 20:57:14.263263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.889 [2024-04-24 20:57:14.263271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.889 [2024-04-24 20:57:14.263485] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.889 [2024-04-24 20:57:14.263700] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.889 [2024-04-24 20:57:14.263710] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.889 [2024-04-24 20:57:14.263717] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.889 [2024-04-24 20:57:14.267183] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.889 [2024-04-24 20:57:14.276172] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.889 [2024-04-24 20:57:14.276700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-04-24 20:57:14.276943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.889 [2024-04-24 20:57:14.276955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.889 [2024-04-24 20:57:14.276963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.889 [2024-04-24 20:57:14.277177] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.889 [2024-04-24 20:57:14.277392] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.889 [2024-04-24 20:57:14.277400] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.889 [2024-04-24 20:57:14.277408] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.889 [2024-04-24 20:57:14.280891] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.890 [2024-04-24 20:57:14.289888] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.890 [2024-04-24 20:57:14.290538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.290775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.290792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.890 [2024-04-24 20:57:14.290802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.890 [2024-04-24 20:57:14.291039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.890 [2024-04-24 20:57:14.291259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.890 [2024-04-24 20:57:14.291269] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.890 [2024-04-24 20:57:14.291281] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.890 [2024-04-24 20:57:14.294761] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.890 [2024-04-24 20:57:14.303760] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.890 [2024-04-24 20:57:14.304411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.304741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.304756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.890 [2024-04-24 20:57:14.304766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.890 [2024-04-24 20:57:14.305004] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.890 [2024-04-24 20:57:14.305222] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.890 [2024-04-24 20:57:14.305231] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.890 [2024-04-24 20:57:14.305239] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.890 [2024-04-24 20:57:14.308715] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.890 [2024-04-24 20:57:14.317531] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.890 [2024-04-24 20:57:14.318208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.318573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.318588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.890 [2024-04-24 20:57:14.318598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.890 [2024-04-24 20:57:14.318848] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.890 [2024-04-24 20:57:14.319069] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.890 [2024-04-24 20:57:14.319078] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.890 [2024-04-24 20:57:14.319086] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.890 [2024-04-24 20:57:14.322564] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.890 [2024-04-24 20:57:14.331362] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.890 [2024-04-24 20:57:14.332028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.332401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.332416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.890 [2024-04-24 20:57:14.332427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.890 [2024-04-24 20:57:14.332667] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.890 [2024-04-24 20:57:14.332900] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.890 [2024-04-24 20:57:14.332911] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.890 [2024-04-24 20:57:14.332919] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.890 [2024-04-24 20:57:14.336414] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.890 [2024-04-24 20:57:14.345215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.890 [2024-04-24 20:57:14.345884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.346298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.346314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.890 [2024-04-24 20:57:14.346324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.890 [2024-04-24 20:57:14.346566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.890 [2024-04-24 20:57:14.346799] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.890 [2024-04-24 20:57:14.346810] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.890 [2024-04-24 20:57:14.346818] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.890 [2024-04-24 20:57:14.350303] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.890 [2024-04-24 20:57:14.359133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.890 [2024-04-24 20:57:14.359718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.360062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.360075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.890 [2024-04-24 20:57:14.360083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.890 [2024-04-24 20:57:14.360301] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.890 [2024-04-24 20:57:14.360518] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.890 [2024-04-24 20:57:14.360529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.890 [2024-04-24 20:57:14.360537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.890 [2024-04-24 20:57:14.364026] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.890 [2024-04-24 20:57:14.373057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.890 [2024-04-24 20:57:14.373628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.373957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.890 [2024-04-24 20:57:14.373971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.890 [2024-04-24 20:57:14.373980] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.890 [2024-04-24 20:57:14.374198] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.890 [2024-04-24 20:57:14.374415] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.891 [2024-04-24 20:57:14.374425] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.891 [2024-04-24 20:57:14.374433] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.891 [2024-04-24 20:57:14.377929] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.891 [2024-04-24 20:57:14.386793] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.891 [2024-04-24 20:57:14.387385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.387764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.387779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.891 [2024-04-24 20:57:14.387789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.891 [2024-04-24 20:57:14.388007] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.891 [2024-04-24 20:57:14.388224] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.891 [2024-04-24 20:57:14.388245] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.891 [2024-04-24 20:57:14.388253] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.891 [2024-04-24 20:57:14.391758] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.891 [2024-04-24 20:57:14.400594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.891 [2024-04-24 20:57:14.401167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.401526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.401539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.891 [2024-04-24 20:57:14.401547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.891 [2024-04-24 20:57:14.401777] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.891 [2024-04-24 20:57:14.401997] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.891 [2024-04-24 20:57:14.402007] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.891 [2024-04-24 20:57:14.402015] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.891 [2024-04-24 20:57:14.405507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.891 [2024-04-24 20:57:14.414349] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.891 [2024-04-24 20:57:14.415039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.415476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.415494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.891 [2024-04-24 20:57:14.415505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.891 [2024-04-24 20:57:14.415766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.891 [2024-04-24 20:57:14.415989] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.891 [2024-04-24 20:57:14.416000] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.891 [2024-04-24 20:57:14.416008] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.891 [2024-04-24 20:57:14.419492] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.891 [2024-04-24 20:57:14.428094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.891 [2024-04-24 20:57:14.428782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.429224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.429241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.891 [2024-04-24 20:57:14.429253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.891 [2024-04-24 20:57:14.429502] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.891 [2024-04-24 20:57:14.429743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.891 [2024-04-24 20:57:14.429754] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.891 [2024-04-24 20:57:14.429763] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.891 [2024-04-24 20:57:14.433265] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.891 [2024-04-24 20:57:14.441888] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.891 [2024-04-24 20:57:14.442586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.442971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.442991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.891 [2024-04-24 20:57:14.443003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.891 [2024-04-24 20:57:14.443252] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.891 [2024-04-24 20:57:14.443475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.891 [2024-04-24 20:57:14.443486] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.891 [2024-04-24 20:57:14.443494] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.891 [2024-04-24 20:57:14.447008] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.891 [2024-04-24 20:57:14.455629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.891 [2024-04-24 20:57:14.456258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.456614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.456627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.891 [2024-04-24 20:57:14.456636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.891 [2024-04-24 20:57:14.456862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.891 [2024-04-24 20:57:14.457081] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.891 [2024-04-24 20:57:14.457090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.891 [2024-04-24 20:57:14.457098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.891 [2024-04-24 20:57:14.460588] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.891 [2024-04-24 20:57:14.469410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.891 [2024-04-24 20:57:14.470113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.470562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.891 [2024-04-24 20:57:14.470579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.891 [2024-04-24 20:57:14.470590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.891 [2024-04-24 20:57:14.470856] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.892 [2024-04-24 20:57:14.471080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.892 [2024-04-24 20:57:14.471090] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.892 [2024-04-24 20:57:14.471098] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.892 [2024-04-24 20:57:14.474589] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.892 [2024-04-24 20:57:14.483213] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.892 [2024-04-24 20:57:14.483905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.484345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.484361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.892 [2024-04-24 20:57:14.484373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.892 [2024-04-24 20:57:14.484622] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.892 [2024-04-24 20:57:14.484860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.892 [2024-04-24 20:57:14.484872] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.892 [2024-04-24 20:57:14.484880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.892 [2024-04-24 20:57:14.488371] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.892 [2024-04-24 20:57:14.496973] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.892 [2024-04-24 20:57:14.497629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.498070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.498088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.892 [2024-04-24 20:57:14.498100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.892 [2024-04-24 20:57:14.498349] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.892 [2024-04-24 20:57:14.498570] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.892 [2024-04-24 20:57:14.498582] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.892 [2024-04-24 20:57:14.498590] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.892 [2024-04-24 20:57:14.502085] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.892 [2024-04-24 20:57:14.510684] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.892 [2024-04-24 20:57:14.511333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.511751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.511769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.892 [2024-04-24 20:57:14.511788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.892 [2024-04-24 20:57:14.512037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.892 [2024-04-24 20:57:14.512259] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.892 [2024-04-24 20:57:14.512270] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.892 [2024-04-24 20:57:14.512278] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:49.892 [2024-04-24 20:57:14.515775] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:49.892 [2024-04-24 20:57:14.524589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:49.892 [2024-04-24 20:57:14.525260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.525677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.892 [2024-04-24 20:57:14.525694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:49.892 [2024-04-24 20:57:14.525706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:49.892 [2024-04-24 20:57:14.525971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:49.892 [2024-04-24 20:57:14.526194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:49.892 [2024-04-24 20:57:14.526206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:49.892 [2024-04-24 20:57:14.526215] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.529712] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.156 [2024-04-24 20:57:14.538336] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.539042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.539442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.539460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.539471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.539720] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.539954] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.539966] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.539975] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.543472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.156 [2024-04-24 20:57:14.552073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.552771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.553168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.553184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.553196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.553452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.553673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.553685] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.553694] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.557196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.156 [2024-04-24 20:57:14.565802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.566509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.566788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.566807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.566819] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.567068] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.567292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.567302] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.567311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.570805] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.156 [2024-04-24 20:57:14.579678] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.580364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.580763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.580782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.580793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.581043] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.581266] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.581277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.581285] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.584785] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.156 [2024-04-24 20:57:14.593590] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.594316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.594760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.594779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.594790] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.595039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.595268] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.595281] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.595289] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.598786] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.156 [2024-04-24 20:57:14.607392] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.608074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.608474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.608490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.608502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.608767] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.608992] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.609002] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.609011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.612499] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.156 [2024-04-24 20:57:14.621110] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.621818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.622254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.622270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.622281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.622531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.622769] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.622781] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.622789] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.626282] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.156 [2024-04-24 20:57:14.634884] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.635459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.635816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.635829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.156 [2024-04-24 20:57:14.635840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.156 [2024-04-24 20:57:14.636058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.156 [2024-04-24 20:57:14.636276] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.156 [2024-04-24 20:57:14.636293] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.156 [2024-04-24 20:57:14.636301] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.156 [2024-04-24 20:57:14.639789] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.156 [2024-04-24 20:57:14.648621] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.156 [2024-04-24 20:57:14.649298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.156 [2024-04-24 20:57:14.649745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.649763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.649775] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.650024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.650246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.650257] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.650267] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.653770] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.157 [2024-04-24 20:57:14.662394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.663105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.663526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.663543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.663554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.663815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.664038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.664049] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.664058] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.667553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.157 [2024-04-24 20:57:14.676169] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.676873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.677276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.677293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.677306] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.677557] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.677796] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.677808] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.677824] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.681336] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.157 [2024-04-24 20:57:14.689947] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.690638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.691051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.691069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.691080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.691329] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.691551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.691562] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.691570] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.695062] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.157 [2024-04-24 20:57:14.703657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.704344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.704664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.704682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.704693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.704959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.705183] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.705193] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.705201] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.708689] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.157 [2024-04-24 20:57:14.717493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.718116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.718470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.718483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.718491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.718711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.718943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.718956] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.718963] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.722448] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.157 [2024-04-24 20:57:14.731249] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.731983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.732245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.732261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.732272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.732521] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.732756] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.732767] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.732775] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.736273] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.157 [2024-04-24 20:57:14.745080] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.745775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.746216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.746233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.746244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.746494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.746716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.746741] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.746750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.750242] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.157 [2024-04-24 20:57:14.758848] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.759542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.760002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.760022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.760034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.760283] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.760505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.157 [2024-04-24 20:57:14.760517] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.157 [2024-04-24 20:57:14.760525] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.157 [2024-04-24 20:57:14.764025] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.157 [2024-04-24 20:57:14.772638] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.157 [2024-04-24 20:57:14.773239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.773679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.157 [2024-04-24 20:57:14.773695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.157 [2024-04-24 20:57:14.773707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.157 [2024-04-24 20:57:14.773971] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.157 [2024-04-24 20:57:14.774195] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.158 [2024-04-24 20:57:14.774206] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.158 [2024-04-24 20:57:14.774214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.158 [2024-04-24 20:57:14.777701] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.158 [2024-04-24 20:57:14.786534] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.158 [2024-04-24 20:57:14.787211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.158 [2024-04-24 20:57:14.787649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.158 [2024-04-24 20:57:14.787666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.158 [2024-04-24 20:57:14.787677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.158 [2024-04-24 20:57:14.787942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.158 [2024-04-24 20:57:14.788165] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.158 [2024-04-24 20:57:14.788178] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.158 [2024-04-24 20:57:14.788186] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.158 [2024-04-24 20:57:14.791681] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.422 [2024-04-24 20:57:14.800320] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.801039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.801472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.801489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.801501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.801766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.801989] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.802003] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.802011] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.805505] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.422 [2024-04-24 20:57:14.814117] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.814816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.815241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.815257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.815268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.815514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.815748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.815760] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.815768] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.819260] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.422 [2024-04-24 20:57:14.827857] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.828551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.828951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.828971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.828983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.829232] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.829454] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.829465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.829474] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.832971] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.422 [2024-04-24 20:57:14.841583] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.842304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.842698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.842714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.842742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.842992] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.843214] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.843225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.843233] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.846730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.422 [2024-04-24 20:57:14.855337] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.856009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.856455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.856471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.856483] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.856746] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.856970] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.856981] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.856989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.860478] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.422 [2024-04-24 20:57:14.869100] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.869780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.870214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.870231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.870243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.870492] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.870714] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.870738] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.870747] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.874236] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.422 [2024-04-24 20:57:14.882873] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.883583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.883982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.884002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.884013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.884263] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.884484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.884496] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.884504] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.887996] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.422 [2024-04-24 20:57:14.896628] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.897329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.897768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.897787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.897806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.898054] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.898277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.898289] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.898297] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.901816] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.422 [2024-04-24 20:57:14.910466] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.422 [2024-04-24 20:57:14.911088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.911319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.422 [2024-04-24 20:57:14.911331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.422 [2024-04-24 20:57:14.911340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.422 [2024-04-24 20:57:14.911559] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.422 [2024-04-24 20:57:14.911792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.422 [2024-04-24 20:57:14.911805] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.422 [2024-04-24 20:57:14.911813] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.422 [2024-04-24 20:57:14.915307] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.423 [2024-04-24 20:57:14.924335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:14.924989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.925428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.925445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:14.925457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:14.925706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:14.925939] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:14.925952] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:14.925960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:14.929449] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.423 [2024-04-24 20:57:14.938069] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:14.938774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.939209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.939227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:14.939238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:14.939494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:14.939717] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:14.939743] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:14.939752] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:14.943244] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.423 [2024-04-24 20:57:14.951844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:14.952528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.952965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.952985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:14.952997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:14.953246] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:14.953467] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:14.953479] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:14.953487] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:14.956981] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.423 [2024-04-24 20:57:14.965581] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:14.966291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.966744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.966762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:14.966774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:14.967022] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:14.967245] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:14.967256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:14.967264] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:14.970752] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.423 [2024-04-24 20:57:14.979370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:14.980078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.980514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.980531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:14.980543] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:14.980805] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:14.981042] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:14.981054] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:14.981062] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:14.984553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.423 [2024-04-24 20:57:14.993160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:14.993832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.994208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:14.994224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:14.994236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:14.994486] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:14.994708] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:14.994721] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:14.994746] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:14.998241] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.423 [2024-04-24 20:57:15.007046] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:15.007753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.008168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.008184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:15.008196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:15.008445] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:15.008667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:15.008680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:15.008688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:15.012192] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.423 [2024-04-24 20:57:15.020806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:15.021502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.021862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.021884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:15.021895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:15.022144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:15.022369] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:15.022387] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:15.022395] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:15.025897] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.423 [2024-04-24 20:57:15.034700] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:15.035400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.035717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.035749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.423 [2024-04-24 20:57:15.035763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.423 [2024-04-24 20:57:15.036014] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.423 [2024-04-24 20:57:15.036236] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.423 [2024-04-24 20:57:15.036247] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.423 [2024-04-24 20:57:15.036255] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.423 [2024-04-24 20:57:15.039743] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.423 [2024-04-24 20:57:15.048546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.423 [2024-04-24 20:57:15.049224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.049637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.423 [2024-04-24 20:57:15.049654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.424 [2024-04-24 20:57:15.049665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.424 [2024-04-24 20:57:15.049931] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.424 [2024-04-24 20:57:15.050154] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.424 [2024-04-24 20:57:15.050166] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.424 [2024-04-24 20:57:15.050174] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.424 [2024-04-24 20:57:15.053660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.062285] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.062993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.063408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.063425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.063437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.063686] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.063923] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.063937] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.063952] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.067439] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.076287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.077018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.077418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.077435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.077447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.077698] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.077936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.077948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.077956] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.081472] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.090094] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.090777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.091184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.091201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.091213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.091462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.091684] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.091696] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.091705] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.095208] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.103810] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.104458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.104739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.104758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.104769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.105018] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.105241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.105252] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.105259] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.108763] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.117589] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.118093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.118485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.118500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.118510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.118738] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.118959] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.118972] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.118980] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.122481] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.131319] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.132054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.132490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.132512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.132526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.132791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.133014] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.133026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.133035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.136529] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.145139] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.145753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.146167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.146184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.146195] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.146443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.146667] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.146680] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.146688] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.150204] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.159071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.159692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.160072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.160086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.160094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.160314] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.160531] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.160542] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.160551] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.164043] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.172875] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.173440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.173818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.173832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.173840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.174059] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.174277] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.174289] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.174296] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.177790] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.186629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.187234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.187613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.187626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.187634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.187860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.188078] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.188088] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.188096] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.191708] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.200545] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.201247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.201686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.201703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.201715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.201976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.202199] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.202211] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.202219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.205705] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.214309] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.215045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.215440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.215457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.215468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.215717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.215953] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.215964] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.215973] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.219460] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.228071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.228778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.229217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.229234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.229245] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.229494] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.229716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.229741] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.229750] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.233240] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.241849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.242547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.242937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.242957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.242968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.243217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.243439] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.243452] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.243460] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.246958] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.255771] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.256383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.256743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.256757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.256766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.256984] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.257201] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.257213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.257221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.260704] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.687 [2024-04-24 20:57:15.269523] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.270228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.270628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.270645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.270657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.270918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.687 [2024-04-24 20:57:15.271140] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.687 [2024-04-24 20:57:15.271153] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.687 [2024-04-24 20:57:15.271161] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.687 [2024-04-24 20:57:15.274651] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.687 [2024-04-24 20:57:15.283288] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.687 [2024-04-24 20:57:15.283744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.284098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.687 [2024-04-24 20:57:15.284111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.687 [2024-04-24 20:57:15.284127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.687 [2024-04-24 20:57:15.284346] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.688 [2024-04-24 20:57:15.284564] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.688 [2024-04-24 20:57:15.284576] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.688 [2024-04-24 20:57:15.284584] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.688 [2024-04-24 20:57:15.288081] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.688 [2024-04-24 20:57:15.297114] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.688 [2024-04-24 20:57:15.297681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.688 [2024-04-24 20:57:15.298011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.688 [2024-04-24 20:57:15.298025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.688 [2024-04-24 20:57:15.298033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.688 [2024-04-24 20:57:15.298250] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.688 [2024-04-24 20:57:15.298466] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.688 [2024-04-24 20:57:15.298476] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.688 [2024-04-24 20:57:15.298484] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.688 [2024-04-24 20:57:15.301968] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.688 [2024-04-24 20:57:15.310991] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.688 [2024-04-24 20:57:15.311591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.688 [2024-04-24 20:57:15.311960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.688 [2024-04-24 20:57:15.311974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.688 [2024-04-24 20:57:15.311982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.688 [2024-04-24 20:57:15.312199] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.688 [2024-04-24 20:57:15.312420] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.688 [2024-04-24 20:57:15.312431] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.688 [2024-04-24 20:57:15.312439] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.688 [2024-04-24 20:57:15.315923] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.688 [2024-04-24 20:57:15.324744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.688 [2024-04-24 20:57:15.325349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.688 [2024-04-24 20:57:15.325732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.688 [2024-04-24 20:57:15.325747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.688 [2024-04-24 20:57:15.325756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.950 [2024-04-24 20:57:15.325983] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.950 [2024-04-24 20:57:15.326204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.950 [2024-04-24 20:57:15.326221] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.950 [2024-04-24 20:57:15.326229] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.950 [2024-04-24 20:57:15.329720] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.950 [2024-04-24 20:57:15.338541] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.950 [2024-04-24 20:57:15.339152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.950 [2024-04-24 20:57:15.339517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.950 [2024-04-24 20:57:15.339530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.950 [2024-04-24 20:57:15.339538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.950 [2024-04-24 20:57:15.339764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.950 [2024-04-24 20:57:15.339982] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.950 [2024-04-24 20:57:15.339993] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.950 [2024-04-24 20:57:15.340001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.950 [2024-04-24 20:57:15.343482] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.950 [2024-04-24 20:57:15.352359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.950 [2024-04-24 20:57:15.353034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.950 [2024-04-24 20:57:15.353426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.950 [2024-04-24 20:57:15.353444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.950 [2024-04-24 20:57:15.353456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.950 [2024-04-24 20:57:15.353705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.950 [2024-04-24 20:57:15.353936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.950 [2024-04-24 20:57:15.353948] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.950 [2024-04-24 20:57:15.353957] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.950 [2024-04-24 20:57:15.357448] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.950 [2024-04-24 20:57:15.366273] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.950 [2024-04-24 20:57:15.366865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.950 [2024-04-24 20:57:15.367301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.950 [2024-04-24 20:57:15.367318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.950 [2024-04-24 20:57:15.367330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.950 [2024-04-24 20:57:15.367585] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.950 [2024-04-24 20:57:15.367821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.367832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.367840] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.371337] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.951 [2024-04-24 20:57:15.380182] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.380839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.381288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.381305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.381317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.381566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.381801] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.381814] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.381822] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.385318] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.951 [2024-04-24 20:57:15.393943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.394555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.394804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.394818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.394827] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.395045] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.395264] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.395277] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.395284] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.398773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.951 [2024-04-24 20:57:15.407805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.408369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.408622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.408635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.408643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.408866] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.409093] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.409105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.409113] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.412591] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.951 [2024-04-24 20:57:15.421623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.422103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.422467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.422480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.422489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.422706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.422931] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.422941] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.422949] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.426439] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.951 [2024-04-24 20:57:15.435473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.436155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.436584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.436601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.436613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.436867] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.437091] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.437102] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.437110] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.440601] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.951 [2024-04-24 20:57:15.449226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.449841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.450094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.450111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.450121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.450363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.450585] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.450595] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.450609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.454122] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.951 [2024-04-24 20:57:15.462958] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.463631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.464011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.464031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.464041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.464282] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.464502] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.464512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.464520] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.468012] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.951 [2024-04-24 20:57:15.476835] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.477377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.477739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.477752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.477760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.477978] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.478193] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.478204] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.478211] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.481713] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.951 [2024-04-24 20:57:15.490733] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.491297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.491500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.491510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.491518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.491740] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.491957] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.491967] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.491979] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.495452] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.951 [2024-04-24 20:57:15.504449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.504980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.505308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.505321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.951 [2024-04-24 20:57:15.505328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.951 [2024-04-24 20:57:15.505544] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.951 [2024-04-24 20:57:15.505764] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.951 [2024-04-24 20:57:15.505775] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.951 [2024-04-24 20:57:15.505782] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.951 [2024-04-24 20:57:15.509250] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.951 [2024-04-24 20:57:15.518254] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.951 [2024-04-24 20:57:15.518800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.951 [2024-04-24 20:57:15.519107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.519119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.952 [2024-04-24 20:57:15.519126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.952 [2024-04-24 20:57:15.519341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.952 [2024-04-24 20:57:15.519556] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.952 [2024-04-24 20:57:15.519564] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.952 [2024-04-24 20:57:15.519571] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.952 [2024-04-24 20:57:15.523044] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.952 [2024-04-24 20:57:15.532043] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.952 [2024-04-24 20:57:15.532605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.532811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.532825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.952 [2024-04-24 20:57:15.532833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.952 [2024-04-24 20:57:15.533049] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.952 [2024-04-24 20:57:15.533265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.952 [2024-04-24 20:57:15.533274] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.952 [2024-04-24 20:57:15.533280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.952 [2024-04-24 20:57:15.536757] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.952 [2024-04-24 20:57:15.545765] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.952 [2024-04-24 20:57:15.546382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.546721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.546742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.952 [2024-04-24 20:57:15.546752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.952 [2024-04-24 20:57:15.546986] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.952 [2024-04-24 20:57:15.547204] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.952 [2024-04-24 20:57:15.547213] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.952 [2024-04-24 20:57:15.547221] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.952 [2024-04-24 20:57:15.550690] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.952 [2024-04-24 20:57:15.559483] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.952 [2024-04-24 20:57:15.560065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.560450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.560461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.952 [2024-04-24 20:57:15.560468] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.952 [2024-04-24 20:57:15.560683] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.952 [2024-04-24 20:57:15.560903] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.952 [2024-04-24 20:57:15.560913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.952 [2024-04-24 20:57:15.560920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.952 [2024-04-24 20:57:15.564388] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:50.952 [2024-04-24 20:57:15.573381] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.952 [2024-04-24 20:57:15.573910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.574250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.574260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.952 [2024-04-24 20:57:15.574268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.952 [2024-04-24 20:57:15.574482] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.952 [2024-04-24 20:57:15.574697] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.952 [2024-04-24 20:57:15.574706] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.952 [2024-04-24 20:57:15.574713] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:50.952 [2024-04-24 20:57:15.578185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:50.952 [2024-04-24 20:57:15.587193] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:50.952 [2024-04-24 20:57:15.587863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.588209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.952 [2024-04-24 20:57:15.588223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:50.952 [2024-04-24 20:57:15.588232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:50.952 [2024-04-24 20:57:15.588465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:50.952 [2024-04-24 20:57:15.588683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:50.952 [2024-04-24 20:57:15.588693] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:50.952 [2024-04-24 20:57:15.588700] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.214 [2024-04-24 20:57:15.592177] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.214 [2024-04-24 20:57:15.601059] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.214 [2024-04-24 20:57:15.601597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.214 [2024-04-24 20:57:15.601943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.214 [2024-04-24 20:57:15.601955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.214 [2024-04-24 20:57:15.601962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.214 [2024-04-24 20:57:15.602178] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.214 [2024-04-24 20:57:15.602393] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.214 [2024-04-24 20:57:15.602401] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.214 [2024-04-24 20:57:15.602408] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.214 [2024-04-24 20:57:15.605881] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.214 [2024-04-24 20:57:15.614870] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.214 [2024-04-24 20:57:15.615429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.214 [2024-04-24 20:57:15.615735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.214 [2024-04-24 20:57:15.615746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.214 [2024-04-24 20:57:15.615753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.214 [2024-04-24 20:57:15.615968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.214 [2024-04-24 20:57:15.616182] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.214 [2024-04-24 20:57:15.616190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.214 [2024-04-24 20:57:15.616197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.214 [2024-04-24 20:57:15.619662] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.214 [2024-04-24 20:57:15.628659] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.214 [2024-04-24 20:57:15.629182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.629520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.629531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.629538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.629758] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.629973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.629982] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.629989] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.633457] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.215 [2024-04-24 20:57:15.642458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.643099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.643481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.643495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.643505] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.643745] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.643964] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.643973] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.643981] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.647452] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.215 [2024-04-24 20:57:15.656250] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.656806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.657033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.657045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.657053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.657268] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.657484] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.657492] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.657499] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.660969] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.215 [2024-04-24 20:57:15.669971] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.670533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.670811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.670823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.670835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.671050] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.671265] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.671273] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.671280] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.674747] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.215 [2024-04-24 20:57:15.683757] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.684374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.684716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.684737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.684747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.684980] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.685198] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.685207] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.685214] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.688683] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.215 [2024-04-24 20:57:15.697478] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.698016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.698356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.698366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.698374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.698589] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.698809] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.698819] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.698826] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.702291] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.215 [2024-04-24 20:57:15.711287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.711831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.712137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.712147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.712159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.712374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.712588] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.712597] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.712604] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.716076] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.215 [2024-04-24 20:57:15.725073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.725596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.725790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.725803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.725810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.726025] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.726241] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.726250] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.726257] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.729723] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.215 [2024-04-24 20:57:15.738932] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.739497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.739806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.739818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.739825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.740041] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.740256] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.215 [2024-04-24 20:57:15.740265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.215 [2024-04-24 20:57:15.740272] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.215 [2024-04-24 20:57:15.743742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.215 [2024-04-24 20:57:15.752736] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.215 [2024-04-24 20:57:15.753289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.753589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.215 [2024-04-24 20:57:15.753599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.215 [2024-04-24 20:57:15.753607] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.215 [2024-04-24 20:57:15.753829] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.215 [2024-04-24 20:57:15.754044] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.754052] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.754059] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.216 [2024-04-24 20:57:15.757523] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.216 [2024-04-24 20:57:15.766517] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.216 [2024-04-24 20:57:15.767206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.767553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.767567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.216 [2024-04-24 20:57:15.767577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.216 [2024-04-24 20:57:15.767816] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.216 [2024-04-24 20:57:15.768035] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.768044] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.768052] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.216 [2024-04-24 20:57:15.771521] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.216 [2024-04-24 20:57:15.780324] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.216 [2024-04-24 20:57:15.780854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.781240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.781255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.216 [2024-04-24 20:57:15.781265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.216 [2024-04-24 20:57:15.781498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.216 [2024-04-24 20:57:15.781716] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.781733] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.781741] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.216 [2024-04-24 20:57:15.785211] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.216 [2024-04-24 20:57:15.794206] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.216 [2024-04-24 20:57:15.794774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.795129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.795140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.216 [2024-04-24 20:57:15.795148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.216 [2024-04-24 20:57:15.795363] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.216 [2024-04-24 20:57:15.795582] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.795591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.795598] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.216 [2024-04-24 20:57:15.799073] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.216 [2024-04-24 20:57:15.808071] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.216 [2024-04-24 20:57:15.808625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.808979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.808991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.216 [2024-04-24 20:57:15.808999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.216 [2024-04-24 20:57:15.809213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.216 [2024-04-24 20:57:15.809427] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.809435] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.809443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.216 [2024-04-24 20:57:15.812908] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.216 [2024-04-24 20:57:15.821903] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.216 [2024-04-24 20:57:15.822462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.822799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.822811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.216 [2024-04-24 20:57:15.822818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.216 [2024-04-24 20:57:15.823032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.216 [2024-04-24 20:57:15.823246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.823255] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.823262] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.216 [2024-04-24 20:57:15.826730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.216 [2024-04-24 20:57:15.835722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.216 [2024-04-24 20:57:15.836284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.836623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.836633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.216 [2024-04-24 20:57:15.836641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.216 [2024-04-24 20:57:15.836860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.216 [2024-04-24 20:57:15.837076] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.837089] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.837096] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.216 [2024-04-24 20:57:15.840559] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.216 [2024-04-24 20:57:15.849560] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.216 [2024-04-24 20:57:15.850237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.850577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.216 [2024-04-24 20:57:15.850591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.216 [2024-04-24 20:57:15.850601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.216 [2024-04-24 20:57:15.850839] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.216 [2024-04-24 20:57:15.851058] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.216 [2024-04-24 20:57:15.851067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.216 [2024-04-24 20:57:15.851074] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.479 [2024-04-24 20:57:15.854546] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.479 [2024-04-24 20:57:15.863344] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.479 [2024-04-24 20:57:15.863877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.864215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.864226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.479 [2024-04-24 20:57:15.864234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.479 [2024-04-24 20:57:15.864449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.479 [2024-04-24 20:57:15.864664] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.479 [2024-04-24 20:57:15.864673] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.479 [2024-04-24 20:57:15.864680] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.479 [2024-04-24 20:57:15.868151] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.479 [2024-04-24 20:57:15.877154] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.479 [2024-04-24 20:57:15.877670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.878011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.878022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.479 [2024-04-24 20:57:15.878030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.479 [2024-04-24 20:57:15.878245] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.479 [2024-04-24 20:57:15.878459] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.479 [2024-04-24 20:57:15.878468] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.479 [2024-04-24 20:57:15.878479] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.479 [2024-04-24 20:57:15.881961] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.479 [2024-04-24 20:57:15.890951] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.479 [2024-04-24 20:57:15.891471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.891801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.891812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.479 [2024-04-24 20:57:15.891820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.479 [2024-04-24 20:57:15.892034] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.479 [2024-04-24 20:57:15.892248] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.479 [2024-04-24 20:57:15.892256] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.479 [2024-04-24 20:57:15.892263] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.479 [2024-04-24 20:57:15.895732] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.479 [2024-04-24 20:57:15.904731] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.479 [2024-04-24 20:57:15.905293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.905597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.905607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.479 [2024-04-24 20:57:15.905614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.479 [2024-04-24 20:57:15.905834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.479 [2024-04-24 20:57:15.906049] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.479 [2024-04-24 20:57:15.906057] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.479 [2024-04-24 20:57:15.906064] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.479 [2024-04-24 20:57:15.909526] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.479 [2024-04-24 20:57:15.918532] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.479 [2024-04-24 20:57:15.919107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.479 [2024-04-24 20:57:15.919714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.919737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:15.919746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:15.919964] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:15.920181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:15.920190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.480 [2024-04-24 20:57:15.920197] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.480 [2024-04-24 20:57:15.923693] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.480 [2024-04-24 20:57:15.932284] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.480 [2024-04-24 20:57:15.932802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.933145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.933156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:15.933163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:15.933379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:15.933593] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:15.933602] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.480 [2024-04-24 20:57:15.933609] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.480 [2024-04-24 20:57:15.937079] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.480 [2024-04-24 20:57:15.946073] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.480 [2024-04-24 20:57:15.946635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.946962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.946973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:15.946981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:15.947195] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:15.947409] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:15.947419] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.480 [2024-04-24 20:57:15.947425] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.480 [2024-04-24 20:57:15.950894] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.480 [2024-04-24 20:57:15.959890] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.480 [2024-04-24 20:57:15.960447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.960797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.960808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:15.960815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:15.961029] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:15.961243] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:15.961254] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.480 [2024-04-24 20:57:15.961261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.480 [2024-04-24 20:57:15.964731] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.480 [2024-04-24 20:57:15.973728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.480 [2024-04-24 20:57:15.974248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.974554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.974564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:15.974571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:15.974790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:15.975004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:15.975012] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.480 [2024-04-24 20:57:15.975020] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.480 [2024-04-24 20:57:15.978482] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.480 [2024-04-24 20:57:15.987489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.480 [2024-04-24 20:57:15.988018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.988364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:15.988375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:15.988382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:15.988596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:15.988816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:15.988824] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.480 [2024-04-24 20:57:15.988831] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.480 [2024-04-24 20:57:15.992294] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.480 [2024-04-24 20:57:16.001281] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.480 [2024-04-24 20:57:16.001839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:16.002142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:16.002153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:16.002161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:16.002374] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:16.002589] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:16.002599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.480 [2024-04-24 20:57:16.002605] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.480 [2024-04-24 20:57:16.006073] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.480 [2024-04-24 20:57:16.015068] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.480 [2024-04-24 20:57:16.015562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:16.015952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.480 [2024-04-24 20:57:16.015967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.480 [2024-04-24 20:57:16.015977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.480 [2024-04-24 20:57:16.016209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.480 [2024-04-24 20:57:16.016428] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.480 [2024-04-24 20:57:16.016436] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.016443] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.019915] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.481 [2024-04-24 20:57:16.028907] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.481 [2024-04-24 20:57:16.029433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.029777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.029788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.481 [2024-04-24 20:57:16.029796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.481 [2024-04-24 20:57:16.030010] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.481 [2024-04-24 20:57:16.030225] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.481 [2024-04-24 20:57:16.030234] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.030241] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.033706] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.481 [2024-04-24 20:57:16.042698] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.481 [2024-04-24 20:57:16.043321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.043703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.043717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.481 [2024-04-24 20:57:16.043734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.481 [2024-04-24 20:57:16.043968] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.481 [2024-04-24 20:57:16.044185] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.481 [2024-04-24 20:57:16.044194] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.044202] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.047669] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.481 [2024-04-24 20:57:16.056458] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.481 [2024-04-24 20:57:16.057070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.057440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.057463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.481 [2024-04-24 20:57:16.057472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.481 [2024-04-24 20:57:16.057705] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.481 [2024-04-24 20:57:16.057936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.481 [2024-04-24 20:57:16.057947] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.057954] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.061422] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.481 [2024-04-24 20:57:16.070209] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.481 [2024-04-24 20:57:16.070839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.071183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.071197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.481 [2024-04-24 20:57:16.071206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.481 [2024-04-24 20:57:16.071439] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.481 [2024-04-24 20:57:16.071657] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.481 [2024-04-24 20:57:16.071667] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.071674] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.075360] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.481 [2024-04-24 20:57:16.083959] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.481 [2024-04-24 20:57:16.084624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.084972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.084987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.481 [2024-04-24 20:57:16.084996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.481 [2024-04-24 20:57:16.085229] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.481 [2024-04-24 20:57:16.085447] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.481 [2024-04-24 20:57:16.085456] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.085463] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.088934] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.481 [2024-04-24 20:57:16.097719] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.481 [2024-04-24 20:57:16.098389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.098774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.098789] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.481 [2024-04-24 20:57:16.098802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.481 [2024-04-24 20:57:16.099036] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.481 [2024-04-24 20:57:16.099253] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.481 [2024-04-24 20:57:16.099262] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.099269] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.102742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.481 [2024-04-24 20:57:16.111528] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.481 [2024-04-24 20:57:16.112118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.112501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.481 [2024-04-24 20:57:16.112514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.481 [2024-04-24 20:57:16.112524] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.481 [2024-04-24 20:57:16.112766] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.481 [2024-04-24 20:57:16.112985] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.481 [2024-04-24 20:57:16.112994] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.481 [2024-04-24 20:57:16.113001] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.481 [2024-04-24 20:57:16.116469] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.744 [2024-04-24 20:57:16.125259] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.744 [2024-04-24 20:57:16.125933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.744 [2024-04-24 20:57:16.126311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.744 [2024-04-24 20:57:16.126325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.744 [2024-04-24 20:57:16.126335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.744 [2024-04-24 20:57:16.126569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.744 [2024-04-24 20:57:16.126795] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.744 [2024-04-24 20:57:16.126804] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.744 [2024-04-24 20:57:16.126811] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.130281] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.745 [2024-04-24 20:57:16.139063] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.139718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.140093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.140107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.745 [2024-04-24 20:57:16.140117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.745 [2024-04-24 20:57:16.140355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.745 [2024-04-24 20:57:16.140575] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.745 [2024-04-24 20:57:16.140584] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.745 [2024-04-24 20:57:16.140591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.144065] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.745 [2024-04-24 20:57:16.152854] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.153381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.153712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.153733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.745 [2024-04-24 20:57:16.153743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.745 [2024-04-24 20:57:16.153976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.745 [2024-04-24 20:57:16.154194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.745 [2024-04-24 20:57:16.154203] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.745 [2024-04-24 20:57:16.154210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.157678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.745 [2024-04-24 20:57:16.166674] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.167318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.167652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.167667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.745 [2024-04-24 20:57:16.167676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.745 [2024-04-24 20:57:16.167918] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.745 [2024-04-24 20:57:16.168136] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.745 [2024-04-24 20:57:16.168145] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.745 [2024-04-24 20:57:16.168152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.171623] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.745 [2024-04-24 20:57:16.180420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.181051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.181392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.181407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.745 [2024-04-24 20:57:16.181417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.745 [2024-04-24 20:57:16.181651] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.745 [2024-04-24 20:57:16.181890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.745 [2024-04-24 20:57:16.181900] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.745 [2024-04-24 20:57:16.181908] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.185378] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.745 [2024-04-24 20:57:16.194171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.194833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.195146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.195160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.745 [2024-04-24 20:57:16.195169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.745 [2024-04-24 20:57:16.195402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.745 [2024-04-24 20:57:16.195619] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.745 [2024-04-24 20:57:16.195628] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.745 [2024-04-24 20:57:16.195636] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.199112] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.745 [2024-04-24 20:57:16.207900] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.208558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.208946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.208962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.745 [2024-04-24 20:57:16.208971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.745 [2024-04-24 20:57:16.209204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.745 [2024-04-24 20:57:16.209421] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.745 [2024-04-24 20:57:16.209430] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.745 [2024-04-24 20:57:16.209437] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.212909] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.745 [2024-04-24 20:57:16.221688] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.222354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.222692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.222706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.745 [2024-04-24 20:57:16.222715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.745 [2024-04-24 20:57:16.222955] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.745 [2024-04-24 20:57:16.223174] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.745 [2024-04-24 20:57:16.223187] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.745 [2024-04-24 20:57:16.223195] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.745 [2024-04-24 20:57:16.226663] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.745 [2024-04-24 20:57:16.235450] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.745 [2024-04-24 20:57:16.236116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.236497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.745 [2024-04-24 20:57:16.236510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.236520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.236760] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.236979] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.236990] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.236997] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.240465] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.746 [2024-04-24 20:57:16.249341] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.249995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.250335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.250348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.250358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.250591] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.250816] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.250825] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.250833] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.254302] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.746 [2024-04-24 20:57:16.263088] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.263781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.264125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.264139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.264148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.264381] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.264599] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.264608] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.264620] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.268098] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.746 [2024-04-24 20:57:16.276887] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.277564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.277819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.277833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.277843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.278077] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.278295] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.278303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.278311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.281792] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.746 [2024-04-24 20:57:16.290782] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.291454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.291740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.291756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.291765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.292000] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.292217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.292225] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.292233] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.295702] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.746 [2024-04-24 20:57:16.304491] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.305120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.305465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.305479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.305489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.305722] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.305950] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.305959] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.305967] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.309439] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.746 [2024-04-24 20:57:16.318226] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.318825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.319047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.319060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.319069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.319302] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.319521] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.319530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.319537] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.323014] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.746 [2024-04-24 20:57:16.332007] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.332662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.333045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.333060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.746 [2024-04-24 20:57:16.333070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.746 [2024-04-24 20:57:16.333303] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.746 [2024-04-24 20:57:16.333520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.746 [2024-04-24 20:57:16.333529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.746 [2024-04-24 20:57:16.333536] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.746 [2024-04-24 20:57:16.337008] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.746 [2024-04-24 20:57:16.345797] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.746 [2024-04-24 20:57:16.346366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.346710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.746 [2024-04-24 20:57:16.346721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.747 [2024-04-24 20:57:16.346735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.747 [2024-04-24 20:57:16.346950] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.747 [2024-04-24 20:57:16.347165] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.747 [2024-04-24 20:57:16.347173] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.747 [2024-04-24 20:57:16.347180] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.747 [2024-04-24 20:57:16.350641] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:51.747 [2024-04-24 20:57:16.359627] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.747 [2024-04-24 20:57:16.360293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.747 [2024-04-24 20:57:16.360626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.747 [2024-04-24 20:57:16.360639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.747 [2024-04-24 20:57:16.360649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.747 [2024-04-24 20:57:16.360890] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.747 [2024-04-24 20:57:16.361109] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.747 [2024-04-24 20:57:16.361117] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.747 [2024-04-24 20:57:16.361125] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.747 [2024-04-24 20:57:16.364591] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:51.747 [2024-04-24 20:57:16.373397] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:51.747 [2024-04-24 20:57:16.374082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.747 [2024-04-24 20:57:16.374419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:51.747 [2024-04-24 20:57:16.374433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:51.747 [2024-04-24 20:57:16.374443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:51.747 [2024-04-24 20:57:16.374676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:51.747 [2024-04-24 20:57:16.374902] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:51.747 [2024-04-24 20:57:16.374912] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:51.747 [2024-04-24 20:57:16.374920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:51.747 [2024-04-24 20:57:16.378388] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.011 [2024-04-24 20:57:16.387190] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.387832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.388142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.388155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.388165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.388398] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.388615] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.388624] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.388631] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.392107] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.011 [2024-04-24 20:57:16.400897] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.401512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.401874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.401889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.401899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.402132] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.402350] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.402359] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.402366] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.405838] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.011 [2024-04-24 20:57:16.414629] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.415272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.415590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.415604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.415614] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.415855] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.416073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.416082] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.416089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.419556] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.011 [2024-04-24 20:57:16.428353] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.429029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.429405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.429420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.429430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.429664] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.429890] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.429899] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.429906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.433372] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.011 [2024-04-24 20:57:16.442162] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.442735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.443018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.443034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.443042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.443257] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.443473] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.443482] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.443489] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.446958] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.011 [2024-04-24 20:57:16.455945] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.456585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.456954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.456970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.456979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.457211] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.457429] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.457438] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.457446] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.460918] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.011 [2024-04-24 20:57:16.469697] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.470346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.470684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.470698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.470708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.470949] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.471168] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.471177] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.471184] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.474658] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.011 [2024-04-24 20:57:16.483452] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.484098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.484482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.484496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.484509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.484751] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.484969] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.484980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.484987] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.488456] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.011 [2024-04-24 20:57:16.497239] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.011 [2024-04-24 20:57:16.497866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.498209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.011 [2024-04-24 20:57:16.498223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.011 [2024-04-24 20:57:16.498232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.011 [2024-04-24 20:57:16.498465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.011 [2024-04-24 20:57:16.498683] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.011 [2024-04-24 20:57:16.498692] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.011 [2024-04-24 20:57:16.498700] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.011 [2024-04-24 20:57:16.502177] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.011 [2024-04-24 20:57:16.510965] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.511379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.511723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.511748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.511756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.511974] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.512189] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.512197] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.512204] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.515671] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.012 [2024-04-24 20:57:16.524662] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.525323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.525674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.525688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.525697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.525942] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.526161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.526170] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.526177] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.529642] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.012 [2024-04-24 20:57:16.538429] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.538970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.539348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.539362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.539371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.539604] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.539829] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.539839] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.539846] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.543316] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.012 [2024-04-24 20:57:16.552311] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.552856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.553237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.553251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.553260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.553493] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.553712] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.553720] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.553735] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.557206] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.012 [2024-04-24 20:57:16.566064] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.566687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.567083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.567098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.567107] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.567341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.567563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.567572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.567579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.571053] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.012 [2024-04-24 20:57:16.579849] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.580412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.580716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.580733] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.580742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.580957] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.581171] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.581180] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.581187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.584665] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.012 [2024-04-24 20:57:16.593656] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.594283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.594669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.594683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.594692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.594933] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.595151] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.595160] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.595167] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.598637] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.012 [2024-04-24 20:57:16.607425] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.608080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.608462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.608475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.608485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.608718] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.608944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.608958] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.608966] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.612435] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.012 [2024-04-24 20:57:16.621228] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.621825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.622201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.622216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.622225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.012 [2024-04-24 20:57:16.622458] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.012 [2024-04-24 20:57:16.622675] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.012 [2024-04-24 20:57:16.622684] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.012 [2024-04-24 20:57:16.622691] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.012 [2024-04-24 20:57:16.626164] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.012 [2024-04-24 20:57:16.634955] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.012 [2024-04-24 20:57:16.635523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.635864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.012 [2024-04-24 20:57:16.635876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.012 [2024-04-24 20:57:16.635883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.013 [2024-04-24 20:57:16.636098] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.013 [2024-04-24 20:57:16.636312] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.013 [2024-04-24 20:57:16.636321] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.013 [2024-04-24 20:57:16.636328] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.013 [2024-04-24 20:57:16.639799] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.276 [2024-04-24 20:57:16.648804] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.276 [2024-04-24 20:57:16.649445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.276 [2024-04-24 20:57:16.649780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.276 [2024-04-24 20:57:16.649795] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.276 [2024-04-24 20:57:16.649805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.276 [2024-04-24 20:57:16.650038] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.276 [2024-04-24 20:57:16.650255] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.276 [2024-04-24 20:57:16.650265] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.276 [2024-04-24 20:57:16.650277] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.276 [2024-04-24 20:57:16.653749] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.276 [2024-04-24 20:57:16.662546] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.276 [2024-04-24 20:57:16.663080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.276 [2024-04-24 20:57:16.663461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.276 [2024-04-24 20:57:16.663475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.276 [2024-04-24 20:57:16.663484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.276 [2024-04-24 20:57:16.663717] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.276 [2024-04-24 20:57:16.663944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.663954] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.663961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.667429] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.277 [2024-04-24 20:57:16.676420] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.677071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.677452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.677466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.677476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.277 [2024-04-24 20:57:16.677709] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.277 [2024-04-24 20:57:16.677936] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.677946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.677953] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.681427] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.277 [2024-04-24 20:57:16.690235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.690826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.691208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.691222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.691232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.277 [2024-04-24 20:57:16.691465] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.277 [2024-04-24 20:57:16.691682] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.691691] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.691699] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.695182] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.277 [2024-04-24 20:57:16.703971] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.704619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.704981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.704996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.705005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.277 [2024-04-24 20:57:16.705238] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.277 [2024-04-24 20:57:16.705456] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.705465] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.705472] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.708944] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.277 [2024-04-24 20:57:16.717742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.718381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.718760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.718775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.718784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.277 [2024-04-24 20:57:16.719017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.277 [2024-04-24 20:57:16.719235] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.719244] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.719251] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.722721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.277 [2024-04-24 20:57:16.731511] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.732173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.732539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.732552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.732562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.277 [2024-04-24 20:57:16.732803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.277 [2024-04-24 20:57:16.733022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.733031] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.733038] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.736507] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.277 [2024-04-24 20:57:16.745301] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.745970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.746308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.746321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.746331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.277 [2024-04-24 20:57:16.746564] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.277 [2024-04-24 20:57:16.746788] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.746799] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.746807] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.750279] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.277 [2024-04-24 20:57:16.759072] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.759759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.759959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.759972] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.759981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.277 [2024-04-24 20:57:16.760214] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.277 [2024-04-24 20:57:16.760433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.277 [2024-04-24 20:57:16.760442] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.277 [2024-04-24 20:57:16.760450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.277 [2024-04-24 20:57:16.763926] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.277 [2024-04-24 20:57:16.772921] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.277 [2024-04-24 20:57:16.773580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.773960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.277 [2024-04-24 20:57:16.773974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.277 [2024-04-24 20:57:16.773984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.774217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.774435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.774443] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.774451] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.777926] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.278 [2024-04-24 20:57:16.786728] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.787397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.787775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.787790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.278 [2024-04-24 20:57:16.787800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.788032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.788250] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.788258] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.788265] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.791742] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.278 [2024-04-24 20:57:16.800526] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.801030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.801339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.801353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.278 [2024-04-24 20:57:16.801363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.801596] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.801823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.801833] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.801840] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.805310] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.278 [2024-04-24 20:57:16.814303] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.814957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.815338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.815352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.278 [2024-04-24 20:57:16.815361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.815594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.815821] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.815831] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.815838] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.819308] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.278 [2024-04-24 20:57:16.828097] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.828741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.829085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.829102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.278 [2024-04-24 20:57:16.829112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.829345] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.829563] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.829572] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.829579] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.833055] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.278 [2024-04-24 20:57:16.841844] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.842470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.842826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.842840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.278 [2024-04-24 20:57:16.842850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.843083] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.843301] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.843310] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.843317] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.846793] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.278 [2024-04-24 20:57:16.855581] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.856231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.856611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.856624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.278 [2024-04-24 20:57:16.856634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.856877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.857096] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.857105] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.857112] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.860580] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.278 [2024-04-24 20:57:16.869371] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.869905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.870297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.870311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.278 [2024-04-24 20:57:16.870324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.278 [2024-04-24 20:57:16.870558] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.278 [2024-04-24 20:57:16.870784] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.278 [2024-04-24 20:57:16.870794] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.278 [2024-04-24 20:57:16.870801] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.278 [2024-04-24 20:57:16.874269] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.278 [2024-04-24 20:57:16.883264] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.278 [2024-04-24 20:57:16.883563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.278 [2024-04-24 20:57:16.883839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.279 [2024-04-24 20:57:16.883851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.279 [2024-04-24 20:57:16.883859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.279 [2024-04-24 20:57:16.884076] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.279 [2024-04-24 20:57:16.884292] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.279 [2024-04-24 20:57:16.884300] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.279 [2024-04-24 20:57:16.884307] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.279 [2024-04-24 20:57:16.887778] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.279 [2024-04-24 20:57:16.896971] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.279 [2024-04-24 20:57:16.897484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.279 [2024-04-24 20:57:16.897804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.279 [2024-04-24 20:57:16.897816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.279 [2024-04-24 20:57:16.897823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.279 [2024-04-24 20:57:16.898037] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.279 [2024-04-24 20:57:16.898252] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.279 [2024-04-24 20:57:16.898261] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.279 [2024-04-24 20:57:16.898268] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.279 [2024-04-24 20:57:16.901731] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.279 [2024-04-24 20:57:16.910712] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.279 [2024-04-24 20:57:16.911374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.279 [2024-04-24 20:57:16.911711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.279 [2024-04-24 20:57:16.911734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.279 [2024-04-24 20:57:16.911744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.279 [2024-04-24 20:57:16.911982] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.279 [2024-04-24 20:57:16.912200] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.279 [2024-04-24 20:57:16.912210] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.279 [2024-04-24 20:57:16.912218] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.542 [2024-04-24 20:57:16.915690] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.542 [2024-04-24 20:57:16.924493] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.542 [2024-04-24 20:57:16.925119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.542 [2024-04-24 20:57:16.925346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.542 [2024-04-24 20:57:16.925360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.542 [2024-04-24 20:57:16.925369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.542 [2024-04-24 20:57:16.925602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.542 [2024-04-24 20:57:16.925830] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.542 [2024-04-24 20:57:16.925840] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.542 [2024-04-24 20:57:16.925847] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.542 [2024-04-24 20:57:16.929316] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.542 [2024-04-24 20:57:16.938329] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.542 [2024-04-24 20:57:16.939037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.542 [2024-04-24 20:57:16.939371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.542 [2024-04-24 20:57:16.939386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.542 [2024-04-24 20:57:16.939396] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.542 [2024-04-24 20:57:16.939628] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.542 [2024-04-24 20:57:16.939855] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.542 [2024-04-24 20:57:16.939866] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.542 [2024-04-24 20:57:16.939874] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.542 [2024-04-24 20:57:16.943347] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.542 [2024-04-24 20:57:16.952150] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:16.952823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.953157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.953171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:16.953181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:16.953414] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:16.953636] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:16.953645] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:16.953652] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:16.957126] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.543 [2024-04-24 20:57:16.965911] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:16.966478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.966798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.966809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:16.966817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:16.967032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:16.967246] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:16.967254] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:16.967261] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:16.970729] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.543 [2024-04-24 20:57:16.979714] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:16.980367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.980748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.980763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:16.980772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:16.981006] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:16.981224] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:16.981233] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:16.981240] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:16.984723] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.543 [2024-04-24 20:57:16.993514] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:16.994163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.994538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:16.994551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:16.994561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:16.994803] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:16.995022] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:16.995036] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:16.995043] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:16.998509] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.543 [2024-04-24 20:57:17.007299] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:17.007864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.008202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.008212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:17.008220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:17.008435] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:17.008658] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:17.008668] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:17.008676] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:17.012146] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.543 [2024-04-24 20:57:17.021153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:17.021811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.022187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.022201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:17.022210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:17.022443] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:17.022661] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:17.022670] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:17.022677] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:17.026152] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.543 [2024-04-24 20:57:17.034939] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:17.035564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.035871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.035886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:17.035896] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:17.036129] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:17.036346] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:17.036355] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:17.036367] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:17.039842] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.543 [2024-04-24 20:57:17.048834] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:17.049475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.049799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.049814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:17.049824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:17.050057] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:17.050274] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:17.050283] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:17.050291] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.543 [2024-04-24 20:57:17.053763] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.543 [2024-04-24 20:57:17.062556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.543 [2024-04-24 20:57:17.063090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.063424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.543 [2024-04-24 20:57:17.063438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.543 [2024-04-24 20:57:17.063447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.543 [2024-04-24 20:57:17.063680] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.543 [2024-04-24 20:57:17.063908] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.543 [2024-04-24 20:57:17.063918] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.543 [2024-04-24 20:57:17.063926] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.067399] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.544 [2024-04-24 20:57:17.076624] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.077302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.077682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.077696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.077706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.077948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.078167] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.078176] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.078183] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.081660] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.544 [2024-04-24 20:57:17.090479] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.090915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.091245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.091256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.091263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.091478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.091693] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.091701] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.091708] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.095185] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.544 [2024-04-24 20:57:17.104204] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.104720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.105116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.105129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.105139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.105372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.105590] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.105599] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.105606] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.109084] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.544 [2024-04-24 20:57:17.118122] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.118684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.118984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.118996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.119004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.119218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.119433] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.119443] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.119450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.122921] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2927289 Killed "${NVMF_APP[@]}" "$@" 00:25:52.544 20:57:17 -- host/bdevperf.sh@36 -- # tgt_init 00:25:52.544 20:57:17 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:52.544 20:57:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:52.544 20:57:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:52.544 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:52.544 [2024-04-24 20:57:17.131936] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.132592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.132959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.132974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.132984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.133217] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.133435] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.133444] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.133453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.136931] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.544 20:57:17 -- nvmf/common.sh@470 -- # nvmfpid=2929301 00:25:52.544 20:57:17 -- nvmf/common.sh@471 -- # waitforlisten 2929301 00:25:52.544 20:57:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:52.544 20:57:17 -- common/autotest_common.sh@817 -- # '[' -z 2929301 ']' 00:25:52.544 20:57:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.544 20:57:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:52.544 20:57:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:52.544 20:57:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:52.544 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:52.544 [2024-04-24 20:57:17.145742] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.146310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.146626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.146637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.146645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.146867] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.147083] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.147092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.147099] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.150572] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.544 [2024-04-24 20:57:17.159574] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.160131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.160446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.160457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.160465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.160681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.160904] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.160913] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.160920] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.164387] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.544 [2024-04-24 20:57:17.173394] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.544 [2024-04-24 20:57:17.173901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.174309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.544 [2024-04-24 20:57:17.174323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.544 [2024-04-24 20:57:17.174333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.544 [2024-04-24 20:57:17.174566] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.544 [2024-04-24 20:57:17.174792] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.544 [2024-04-24 20:57:17.174801] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.544 [2024-04-24 20:57:17.174808] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.544 [2024-04-24 20:57:17.178278] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.808 [2024-04-24 20:57:17.187287] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.808 [2024-04-24 20:57:17.187888] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:25:52.808 [2024-04-24 20:57:17.187932] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:52.808 [2024-04-24 20:57:17.187964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.188349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.188362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.808 [2024-04-24 20:57:17.188372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.808 [2024-04-24 20:57:17.188605] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.808 [2024-04-24 20:57:17.188827] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.808 [2024-04-24 20:57:17.188836] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.808 [2024-04-24 20:57:17.188844] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.808 [2024-04-24 20:57:17.192315] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.808 [2024-04-24 20:57:17.201125] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.808 [2024-04-24 20:57:17.201687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.202035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.202047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.808 [2024-04-24 20:57:17.202055] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.808 [2024-04-24 20:57:17.202270] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.808 [2024-04-24 20:57:17.202486] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.808 [2024-04-24 20:57:17.202495] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.808 [2024-04-24 20:57:17.202502] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.808 [2024-04-24 20:57:17.205975] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.808 [2024-04-24 20:57:17.214974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.808 [2024-04-24 20:57:17.215406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.215733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.215745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.808 [2024-04-24 20:57:17.215752] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.808 [2024-04-24 20:57:17.215967] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.808 [2024-04-24 20:57:17.216181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.808 [2024-04-24 20:57:17.216189] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.808 [2024-04-24 20:57:17.216196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.808 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.808 [2024-04-24 20:57:17.219659] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.808 [2024-04-24 20:57:17.228867] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.808 [2024-04-24 20:57:17.229460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.229710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.229731] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.808 [2024-04-24 20:57:17.229741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.808 [2024-04-24 20:57:17.229976] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.808 [2024-04-24 20:57:17.230194] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.808 [2024-04-24 20:57:17.230202] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.808 [2024-04-24 20:57:17.230210] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.808 [2024-04-24 20:57:17.233682] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.808 [2024-04-24 20:57:17.242685] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.808 [2024-04-24 20:57:17.243271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.243618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.243629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.808 [2024-04-24 20:57:17.243637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.808 [2024-04-24 20:57:17.243858] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.808 [2024-04-24 20:57:17.244073] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.808 [2024-04-24 20:57:17.244083] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.808 [2024-04-24 20:57:17.244090] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.808 [2024-04-24 20:57:17.247553] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.808 [2024-04-24 20:57:17.252282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:52.808 [2024-04-24 20:57:17.256552] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.808 [2024-04-24 20:57:17.256977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.257350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.257360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.808 [2024-04-24 20:57:17.257368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.808 [2024-04-24 20:57:17.257582] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.808 [2024-04-24 20:57:17.257802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.808 [2024-04-24 20:57:17.257812] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.808 [2024-04-24 20:57:17.257819] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.808 [2024-04-24 20:57:17.261283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.808 [2024-04-24 20:57:17.270285] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.808 [2024-04-24 20:57:17.270737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.808 [2024-04-24 20:57:17.271053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.271066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.271073] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.271288] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.271503] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.271512] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.271519] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.274990] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.809 [2024-04-24 20:57:17.284001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.284568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.285014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.285053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.285066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.285304] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.285524] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.285533] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.285541] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.289024] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.809 [2024-04-24 20:57:17.297917] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.298466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.298941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.298979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.298990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.299223] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.299441] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.299450] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.299458] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.302940] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.809 [2024-04-24 20:57:17.311744] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.312284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.312629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.312640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.312648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.312870] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.313086] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.313095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.313103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.314857] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:52.809 [2024-04-24 20:57:17.314884] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:52.809 [2024-04-24 20:57:17.314892] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:52.809 [2024-04-24 20:57:17.314899] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:52.809 [2024-04-24 20:57:17.314908] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:52.809 [2024-04-24 20:57:17.315016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:52.809 [2024-04-24 20:57:17.315173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.809 [2024-04-24 20:57:17.315173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:52.809 [2024-04-24 20:57:17.316566] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.809 [2024-04-24 20:57:17.325565] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.326010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.326346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.326357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.326365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.326581] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.326802] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.326811] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.326818] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.330284] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.809 [2024-04-24 20:57:17.339283] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.339983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.340375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.340389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.340399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.340635] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.340860] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.340869] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.340877] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.344346] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.809 [2024-04-24 20:57:17.353138] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.353815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.354159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.354174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.354183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.354419] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.354638] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.354657] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.354665] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.358144] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.809 [2024-04-24 20:57:17.366944] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.367560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.367986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.368001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.368010] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.368244] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.368461] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.368471] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.368479] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.371957] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.809 [2024-04-24 20:57:17.380754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.809 [2024-04-24 20:57:17.381390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.381625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.809 [2024-04-24 20:57:17.381638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.809 [2024-04-24 20:57:17.381648] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.809 [2024-04-24 20:57:17.381888] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.809 [2024-04-24 20:57:17.382107] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.809 [2024-04-24 20:57:17.382116] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.809 [2024-04-24 20:57:17.382124] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.809 [2024-04-24 20:57:17.385608] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.810 [2024-04-24 20:57:17.394610] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.810 [2024-04-24 20:57:17.395248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.395599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.395612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.810 [2024-04-24 20:57:17.395622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.810 [2024-04-24 20:57:17.395862] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.810 [2024-04-24 20:57:17.396080] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.810 [2024-04-24 20:57:17.396089] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.810 [2024-04-24 20:57:17.396101] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.810 [2024-04-24 20:57:17.399570] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.810 20:57:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:52.810 20:57:17 -- common/autotest_common.sh@850 -- # return 0 00:25:52.810 20:57:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:52.810 20:57:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:52.810 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:52.810 [2024-04-24 20:57:17.408370] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.810 [2024-04-24 20:57:17.408968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.409152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.409162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.810 [2024-04-24 20:57:17.409170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.810 [2024-04-24 20:57:17.409386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.810 [2024-04-24 20:57:17.409602] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.810 [2024-04-24 20:57:17.409611] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.810 [2024-04-24 20:57:17.409617] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.810 [2024-04-24 20:57:17.413087] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.810 [2024-04-24 20:57:17.422087] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.810 [2024-04-24 20:57:17.422649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.423027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.423039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.810 [2024-04-24 20:57:17.423046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.810 [2024-04-24 20:57:17.423261] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.810 [2024-04-24 20:57:17.423475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.810 [2024-04-24 20:57:17.423486] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.810 [2024-04-24 20:57:17.423492] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.810 [2024-04-24 20:57:17.426960] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:52.810 [2024-04-24 20:57:17.435958] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:52.810 [2024-04-24 20:57:17.436628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.437029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:52.810 [2024-04-24 20:57:17.437044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:52.810 [2024-04-24 20:57:17.437054] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:52.810 [2024-04-24 20:57:17.437287] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:52.810 [2024-04-24 20:57:17.437505] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:52.810 [2024-04-24 20:57:17.437520] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:52.810 [2024-04-24 20:57:17.437528] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:52.810 [2024-04-24 20:57:17.441004] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:52.810 20:57:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:52.810 20:57:17 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:52.810 20:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.810 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:53.072 [2024-04-24 20:57:17.448895] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:53.072 [2024-04-24 20:57:17.449805] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.072 [2024-04-24 20:57:17.450381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.450731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.450743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:53.072 [2024-04-24 20:57:17.450751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:53.072 [2024-04-24 20:57:17.450966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:53.072 [2024-04-24 20:57:17.451181] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.072 [2024-04-24 20:57:17.451190] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.072 [2024-04-24 20:57:17.451196] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.072 20:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.072 20:57:17 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:53.072 [2024-04-24 20:57:17.454661] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.072 20:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.072 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:53.072 [2024-04-24 20:57:17.463657] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.072 [2024-04-24 20:57:17.464235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.464427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.464438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:53.072 [2024-04-24 20:57:17.464445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:53.072 [2024-04-24 20:57:17.464659] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:53.072 [2024-04-24 20:57:17.464880] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.072 [2024-04-24 20:57:17.464889] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.072 [2024-04-24 20:57:17.464896] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.072 [2024-04-24 20:57:17.468359] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.072 [2024-04-24 20:57:17.477553] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.072 [2024-04-24 20:57:17.478218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.478598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.478618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:53.072 [2024-04-24 20:57:17.478628] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:53.072 [2024-04-24 20:57:17.478869] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:53.072 [2024-04-24 20:57:17.479088] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.072 [2024-04-24 20:57:17.479097] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.072 [2024-04-24 20:57:17.479105] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.072 [2024-04-24 20:57:17.482574] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:53.072 Malloc0 00:25:53.072 20:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.072 20:57:17 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:53.072 20:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.072 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:53.072 [2024-04-24 20:57:17.491383] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.072 [2024-04-24 20:57:17.492056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.492412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.492426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:53.072 [2024-04-24 20:57:17.492436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:53.072 [2024-04-24 20:57:17.492669] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:53.072 [2024-04-24 20:57:17.492894] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.072 [2024-04-24 20:57:17.492904] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.072 [2024-04-24 20:57:17.492911] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.072 [2024-04-24 20:57:17.496381] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.072 20:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.072 20:57:17 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:53.072 20:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.072 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:53.072 [2024-04-24 20:57:17.505171] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.072 [2024-04-24 20:57:17.505853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.506205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.072 [2024-04-24 20:57:17.506219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b1f30 with addr=10.0.0.2, port=4420 00:25:53.072 [2024-04-24 20:57:17.506228] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b1f30 is same with the state(5) to be set 00:25:53.072 [2024-04-24 20:57:17.506461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b1f30 (9): Bad file descriptor 00:25:53.072 [2024-04-24 20:57:17.506679] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.072 [2024-04-24 20:57:17.506688] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.072 [2024-04-24 20:57:17.506696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:53.072 20:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.072 20:57:17 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.072 [2024-04-24 20:57:17.510177] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:53.072 20:57:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.072 20:57:17 -- common/autotest_common.sh@10 -- # set +x 00:25:53.072 [2024-04-24 20:57:17.516849] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:53.072 [2024-04-24 20:57:17.518976] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.072 20:57:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.072 20:57:17 -- host/bdevperf.sh@38 -- # wait 2928245 00:25:53.072 [2024-04-24 20:57:17.562438] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:03.070 00:26:03.070 Latency(us) 00:26:03.070 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:03.070 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:03.070 Verification LBA range: start 0x0 length 0x4000 00:26:03.070 Nvme1n1 : 15.01 6924.20 27.05 8324.91 0.00 8368.42 771.41 19551.57 00:26:03.070 =================================================================================================================== 00:26:03.070 Total : 6924.20 27.05 8324.91 0.00 8368.42 771.41 19551.57 00:26:03.070 20:57:26 -- host/bdevperf.sh@39 -- # sync 00:26:03.070 20:57:26 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:03.070 20:57:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:03.070 20:57:26 -- common/autotest_common.sh@10 -- # set +x 00:26:03.070 20:57:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:03.070 20:57:26 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:03.070 20:57:26 -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:03.070 20:57:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:03.070 20:57:26 -- nvmf/common.sh@117 -- # sync 00:26:03.070 20:57:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:03.070 20:57:26 -- nvmf/common.sh@120 -- # set +e 00:26:03.070 20:57:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:03.070 20:57:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:03.070 rmmod nvme_tcp 00:26:03.070 rmmod nvme_fabrics 00:26:03.070 rmmod nvme_keyring 00:26:03.070 20:57:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:03.070 20:57:26 -- nvmf/common.sh@124 -- # set -e 00:26:03.070 20:57:26 -- nvmf/common.sh@125 -- # return 0 00:26:03.070 20:57:26 -- nvmf/common.sh@478 -- # '[' -n 2929301 ']' 00:26:03.070 20:57:26 -- nvmf/common.sh@479 -- # killprocess 2929301 00:26:03.070 20:57:26 -- common/autotest_common.sh@936 -- # '[' -z 2929301 ']' 00:26:03.070 20:57:26 -- common/autotest_common.sh@940 -- # kill -0 2929301 00:26:03.070 20:57:26 -- common/autotest_common.sh@941 -- # uname 00:26:03.070 20:57:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:03.070 20:57:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2929301 00:26:03.070 20:57:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:03.070 20:57:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:03.070 20:57:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 
2929301' 00:26:03.070 killing process with pid 2929301 00:26:03.070 20:57:26 -- common/autotest_common.sh@955 -- # kill 2929301 00:26:03.070 20:57:26 -- common/autotest_common.sh@960 -- # wait 2929301 00:26:03.070 20:57:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:03.070 20:57:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:03.070 20:57:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:03.070 20:57:26 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:03.070 20:57:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:03.070 20:57:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:03.070 20:57:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:03.070 20:57:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.454 20:57:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:04.454 00:26:04.454 real 0m27.830s 00:26:04.454 user 1m2.901s 00:26:04.454 sys 0m7.076s 00:26:04.454 20:57:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:04.454 20:57:29 -- common/autotest_common.sh@10 -- # set +x 00:26:04.454 ************************************ 00:26:04.454 END TEST nvmf_bdevperf 00:26:04.454 ************************************ 00:26:04.454 20:57:29 -- nvmf/nvmf.sh@120 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:04.454 20:57:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:04.454 20:57:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:04.454 20:57:29 -- common/autotest_common.sh@10 -- # set +x 00:26:04.769 ************************************ 00:26:04.769 START TEST nvmf_target_disconnect 00:26:04.769 ************************************ 00:26:04.769 20:57:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:04.769 * Looking for test storage... 
00:26:04.769 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:04.769 20:57:29 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.769 20:57:29 -- nvmf/common.sh@7 -- # uname -s 00:26:04.769 20:57:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.769 20:57:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.769 20:57:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.769 20:57:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.769 20:57:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.769 20:57:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.769 20:57:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.769 20:57:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.769 20:57:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.769 20:57:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.769 20:57:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:04.769 20:57:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:04.769 20:57:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.769 20:57:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.769 20:57:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.769 20:57:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.769 20:57:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.769 20:57:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.769 20:57:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.769 20:57:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.769 20:57:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.769 20:57:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.769 20:57:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.769 20:57:29 -- paths/export.sh@5 -- # export PATH 00:26:04.769 20:57:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.769 20:57:29 -- nvmf/common.sh@47 -- # : 0 00:26:04.769 20:57:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:04.769 20:57:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:04.769 20:57:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.769 20:57:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.769 20:57:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.769 20:57:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:04.769 20:57:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:04.769 20:57:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.769 20:57:29 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:04.769 20:57:29 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:04.769 20:57:29 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:04.769 20:57:29 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:04.769 20:57:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:04.769 20:57:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.769 20:57:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:04.769 20:57:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:04.769 20:57:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:04.769 20:57:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.769 20:57:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.769 20:57:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.769 20:57:29 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:04.769 20:57:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:04.769 20:57:29 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.769 20:57:29 -- common/autotest_common.sh@10 -- # set +x 00:26:12.913 20:57:36 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:12.913 20:57:36 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.913 20:57:36 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.913 20:57:36 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.913 20:57:36 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.913 20:57:36 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.913 20:57:36 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.913 
20:57:36 -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.913 20:57:36 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.913 20:57:36 -- nvmf/common.sh@296 -- # e810=() 00:26:12.913 20:57:36 -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.913 20:57:36 -- nvmf/common.sh@297 -- # x722=() 00:26:12.913 20:57:36 -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.913 20:57:36 -- nvmf/common.sh@298 -- # mlx=() 00:26:12.913 20:57:36 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.913 20:57:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.913 20:57:36 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.913 20:57:36 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:12.913 20:57:36 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.913 20:57:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.913 20:57:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:12.913 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:12.913 20:57:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.913 20:57:36 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:12.913 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:12.913 20:57:36 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.913 20:57:36 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.913 20:57:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.913 20:57:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:12.913 20:57:36 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.913 20:57:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:12.913 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:12.913 20:57:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.913 20:57:36 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.913 20:57:36 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.913 20:57:36 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:12.913 20:57:36 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.913 20:57:36 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:12.913 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:12.913 20:57:36 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.913 20:57:36 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:12.913 20:57:36 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:12.913 20:57:36 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:12.913 20:57:36 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.913 20:57:36 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.913 20:57:36 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.913 20:57:36 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.913 20:57:36 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.913 20:57:36 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.913 20:57:36 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.913 20:57:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.913 20:57:36 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.913 20:57:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.913 20:57:36 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.913 20:57:36 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.913 20:57:36 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.913 20:57:36 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.913 20:57:36 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.913 20:57:36 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.913 20:57:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.913 20:57:36 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.913 20:57:36 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.913 20:57:36 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:26:12.913 00:26:12.913 --- 10.0.0.2 ping statistics --- 00:26:12.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.913 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:26:12.913 20:57:36 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:26:12.913 00:26:12.913 --- 10.0.0.1 ping statistics --- 00:26:12.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.913 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:26:12.913 20:57:36 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.913 20:57:36 -- nvmf/common.sh@411 -- # return 0 00:26:12.913 20:57:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:12.913 20:57:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.913 20:57:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:12.913 20:57:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.913 20:57:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:12.913 20:57:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:12.913 20:57:36 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:12.913 20:57:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:12.913 20:57:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:12.913 20:57:36 -- common/autotest_common.sh@10 -- # set +x 00:26:12.914 ************************************ 00:26:12.914 START TEST nvmf_target_disconnect_tc1 00:26:12.914 ************************************ 00:26:12.914 20:57:36 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc1 00:26:12.914 20:57:36 -- host/target_disconnect.sh@32 -- # set +e 00:26:12.914 20:57:36 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:12.914 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.914 [2024-04-24 20:57:36.984986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.914 [2024-04-24 20:57:36.985432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:12.914 [2024-04-24 20:57:36.985451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812bf0 with addr=10.0.0.2, port=4420 00:26:12.914 [2024-04-24 20:57:36.985493] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:12.914 [2024-04-24 20:57:36.985516] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:12.914 [2024-04-24 20:57:36.985527] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:12.914 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:12.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:12.914 Initializing NVMe Controllers 00:26:12.914 20:57:36 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:12.914 20:57:36 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:12.914 20:57:36 -- common/autotest_common.sh@1139 -- # [[ hxBET =~ e ]] 00:26:12.914 20:57:36 -- common/autotest_common.sh@1139 -- # return 0 00:26:12.914 20:57:36 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:12.914 20:57:36 -- host/target_disconnect.sh@41 -- # set -e 00:26:12.914 00:26:12.914 real 0m0.124s 00:26:12.914 user 0m0.053s 00:26:12.914 sys 0m0.069s 00:26:12.914 20:57:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:12.914 20:57:36 -- common/autotest_common.sh@10 -- # set +x 00:26:12.914 ************************************ 00:26:12.914 
END TEST nvmf_target_disconnect_tc1 00:26:12.914 ************************************ 00:26:12.914 20:57:37 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:12.914 20:57:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:12.914 20:57:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:12.914 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:26:12.914 ************************************ 00:26:12.914 START TEST nvmf_target_disconnect_tc2 00:26:12.914 ************************************ 00:26:12.914 20:57:37 -- common/autotest_common.sh@1111 -- # nvmf_target_disconnect_tc2 00:26:12.914 20:57:37 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:12.914 20:57:37 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:12.914 20:57:37 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:12.914 20:57:37 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:12.914 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:26:12.914 20:57:37 -- nvmf/common.sh@470 -- # nvmfpid=2935546 00:26:12.914 20:57:37 -- nvmf/common.sh@471 -- # waitforlisten 2935546 00:26:12.914 20:57:37 -- common/autotest_common.sh@817 -- # '[' -z 2935546 ']' 00:26:12.914 20:57:37 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:12.914 20:57:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.914 20:57:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:12.914 20:57:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.914 20:57:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:12.914 20:57:37 -- common/autotest_common.sh@10 -- # set +x 00:26:12.914 [2024-04-24 20:57:37.266448] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:26:12.914 [2024-04-24 20:57:37.266506] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.914 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.914 [2024-04-24 20:57:37.357337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:12.914 [2024-04-24 20:57:37.451491] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.914 [2024-04-24 20:57:37.451550] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.914 [2024-04-24 20:57:37.451559] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.914 [2024-04-24 20:57:37.451566] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.914 [2024-04-24 20:57:37.451572] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:12.914 [2024-04-24 20:57:37.451759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:12.914 [2024-04-24 20:57:37.451938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:12.914 [2024-04-24 20:57:37.452098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:12.914 [2024-04-24 20:57:37.452098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:13.486 20:57:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:13.486 20:57:38 -- common/autotest_common.sh@850 -- # return 0 00:26:13.486 20:57:38 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:13.486 20:57:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:13.486 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.749 20:57:38 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.749 20:57:38 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:13.749 20:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.749 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.749 Malloc0 00:26:13.749 20:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.749 20:57:38 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:13.749 20:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.749 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.749 [2024-04-24 20:57:38.191347] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.749 20:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.749 20:57:38 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:13.749 20:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.749 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.749 20:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.749 20:57:38 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.749 20:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.749 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.749 20:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.749 20:57:38 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.749 20:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.749 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.749 [2024-04-24 20:57:38.231773] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.749 20:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.749 20:57:38 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:13.749 20:57:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:13.749 20:57:38 -- common/autotest_common.sh@10 -- # set +x 00:26:13.749 20:57:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:13.749 20:57:38 -- host/target_disconnect.sh@50 -- # reconnectpid=2935676 00:26:13.749 20:57:38 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:13.749 20:57:38 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:13.749 EAL: No free 2048 kB hugepages reported on node 1 00:26:15.668 20:57:40 -- host/target_disconnect.sh@53 -- # kill -9 2935546 00:26:15.668 20:57:40 -- host/target_disconnect.sh@55 -- # sleep 2 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 [2024-04-24 20:57:40.267302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read 
completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Write completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.668 Read completed with error (sct=0, sc=8) 00:26:15.668 starting I/O failed 00:26:15.669 Read completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Write completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Write completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Write completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Write completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Write completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Read completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Write completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 Write completed with error (sct=0, sc=8) 00:26:15.669 starting I/O failed 00:26:15.669 [2024-04-24 20:57:40.267641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:15.669 [2024-04-24 20:57:40.268232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.268490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.268509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.268978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.269286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.269302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 
00:26:15.669 [2024-04-24 20:57:40.269642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.269954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.269996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.270333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.270696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.270708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.271088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.271494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.271509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.271987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.272348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.272364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.272687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.272944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.272957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.273260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.273579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.273592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.273930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.274275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.274288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 
00:26:15.669 [2024-04-24 20:57:40.274634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.274848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.274863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.275203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.275558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.275571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.275900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.276227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.276240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.276412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.276697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.276710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.276936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.277251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.277264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.277565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.277992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.278007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.278330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.278647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.278661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 
00:26:15.669 [2024-04-24 20:57:40.278856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.279150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.279161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.280383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.280735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.280749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.281068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.281388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.281401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.281713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.281967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.281980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.282302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.282556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.282568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.282874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.283241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.283254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.283588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.283915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.283927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 
00:26:15.669 [2024-04-24 20:57:40.284249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.284562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.669 [2024-04-24 20:57:40.284574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.669 qpair failed and we were unable to recover it. 00:26:15.669 [2024-04-24 20:57:40.284894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.285217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.285227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.285567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.285762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.285774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.286067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.286383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.286395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.286707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.287048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.287061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.287419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.287778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.287791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.288021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.288219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.288231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 
00:26:15.670 [2024-04-24 20:57:40.288556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.288868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.288879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.289230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.289546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.289558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.289857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.290205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.290218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.290541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.290750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.290764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.291059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.291391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.291403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.291633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.291949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.291961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.292315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.292604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.292615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 
00:26:15.670 [2024-04-24 20:57:40.292993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.293318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.293329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.293625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.293943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.293955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.294270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.294545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.294557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.294932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.295256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.295271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.295585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.295915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.295931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.296257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.296586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.296600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.296940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.297239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.297254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 
00:26:15.670 [2024-04-24 20:57:40.297471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.297770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.297784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.298111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.298439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.298452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.298669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.298933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.298947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.299251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.299594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.299611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.299940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.300276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.300290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.300506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.300828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.300844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.301182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.301509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.301523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 
00:26:15.670 [2024-04-24 20:57:40.301868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.302180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.670 [2024-04-24 20:57:40.302194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.670 qpair failed and we were unable to recover it. 00:26:15.670 [2024-04-24 20:57:40.302518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.302852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.302866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.671 qpair failed and we were unable to recover it. 00:26:15.671 [2024-04-24 20:57:40.303152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.303493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.303509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.671 qpair failed and we were unable to recover it. 00:26:15.671 [2024-04-24 20:57:40.303746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.305216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.305248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.671 qpair failed and we were unable to recover it. 00:26:15.671 [2024-04-24 20:57:40.305585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.305704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.305716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.671 qpair failed and we were unable to recover it. 00:26:15.671 [2024-04-24 20:57:40.306017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.306355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.671 [2024-04-24 20:57:40.306373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.671 qpair failed and we were unable to recover it. 00:26:15.671 [2024-04-24 20:57:40.306698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.307120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.307146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 
00:26:15.941 [2024-04-24 20:57:40.307470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.307803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.307821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.308258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.308947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.308977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.309292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.309653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.309672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.310014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.310216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.310235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.310558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.310911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.310930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.311274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.311560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.311579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.311910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.312256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.312274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 
00:26:15.941 [2024-04-24 20:57:40.312605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.312922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.312940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.313288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.314624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.314655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.314976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.315320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.315345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.315661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.315969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.315988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.316319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.316667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.316686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.317010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.317357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.317374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.317701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.318034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.318053] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 
00:26:15.941 [2024-04-24 20:57:40.318345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.318686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.318708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.319046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.319246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.319270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.319640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.319968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.319991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.320243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.320588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.320610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.941 qpair failed and we were unable to recover it. 00:26:15.941 [2024-04-24 20:57:40.320963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.941 [2024-04-24 20:57:40.321323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.321345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.321686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.322031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.322058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.322424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.322649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.322676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 
00:26:15.942 [2024-04-24 20:57:40.323034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.323391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.323415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.323753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.324027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.324050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.324431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.324780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.324803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.325155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.325540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.325561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.325870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.326227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.326249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.326613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.326963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.326985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.327331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.327683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.327704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 
00:26:15.942 [2024-04-24 20:57:40.329342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.329750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.329784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.330197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.330513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.330541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.330886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.331235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.331264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.331599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.331922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.331951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.332306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.332642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.332671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.333046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.334664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.334714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.335078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.335425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.335455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 
00:26:15.942 [2024-04-24 20:57:40.335788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.336157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.336186] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.336549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.336888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.336919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.337277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.339098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.339150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.339485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.339768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.339801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.340176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.341812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.341861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.342173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.342517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.342547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.342914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.343259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.343288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 
00:26:15.942 [2024-04-24 20:57:40.343672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.343999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.344030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.344420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.344774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.344804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.345200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.345555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.345583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.345965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.346309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.346337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.346629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.346983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.942 [2024-04-24 20:57:40.347011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.942 qpair failed and we were unable to recover it. 00:26:15.942 [2024-04-24 20:57:40.347374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.347738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.347768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.348163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.348506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.348536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 
00:26:15.943 [2024-04-24 20:57:40.348877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.349207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.349236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.349624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.349947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.349977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.350351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.350673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.350701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.351056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.351403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.351432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.351793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.352117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.352145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.352471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.352801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.352831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.353234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.353567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.353596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 
00:26:15.943 [2024-04-24 20:57:40.353887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.354249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.354277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.354634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.354961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.354989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.355364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.355745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.355776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.356033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.356278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.356306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.356574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.356937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.356966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.357301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.357648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.357677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.358043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.358281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.358312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 
00:26:15.943 [2024-04-24 20:57:40.358647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.358992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.359023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.359373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.359692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.359722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.359991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.360349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.360377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.360748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.361106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.361134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.361536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.361847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.361876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.362133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.362520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.362548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.362815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.363166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.363195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 
00:26:15.943 [2024-04-24 20:57:40.363623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.363963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.363993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.364361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.364716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.364756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.365105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.365458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.365486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.365740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.366129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.366157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.366485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.366769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.366800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.943 qpair failed and we were unable to recover it. 00:26:15.943 [2024-04-24 20:57:40.367160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.367491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.943 [2024-04-24 20:57:40.367520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.367872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.368299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.368327] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 
00:26:15.944 [2024-04-24 20:57:40.368633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.368982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.369011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.369267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.369623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.369651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.369996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.370351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.370379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.370762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.371105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.371135] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.371495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.371860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.371890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.372272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.372626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.372654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.373020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.373379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.373408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 
00:26:15.944 [2024-04-24 20:57:40.373764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.374096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.374124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.374409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.374762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.374792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.375185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.375415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.375443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.375779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.376120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.376148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.376511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.376837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.376866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.377283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.377535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.377562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.377914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.378286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.378314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 
00:26:15.944 [2024-04-24 20:57:40.378580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.378736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.378765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.379203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.379554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.379582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.379927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.380269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.380299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.380643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.380994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.381024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.381368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.381722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.381764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.382110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.382352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.382382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.382720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.383091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.383119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 
00:26:15.944 [2024-04-24 20:57:40.383453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.383708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.383750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.384106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.384473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.384502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.384810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.385172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.385199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.385571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.385829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.385856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.386211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.386576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.386604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.386907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.387286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.387314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.944 qpair failed and we were unable to recover it. 00:26:15.944 [2024-04-24 20:57:40.387581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.944 [2024-04-24 20:57:40.387925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.387955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 
00:26:15.945 [2024-04-24 20:57:40.388370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.388683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.388713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.389068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.389401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.389430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.389834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.390036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.390064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.390403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.390777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.390805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.391139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.391461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.391506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.391756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.392137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.392165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.392481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.392849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.392879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 
00:26:15.945 [2024-04-24 20:57:40.393316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.393692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.393720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.394085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.394452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.394479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.394771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.395116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.395145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.395397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.395781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.395810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.396200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.396523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.396551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.396944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.397175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.397204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.397559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.397921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.397950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 
00:26:15.945 [2024-04-24 20:57:40.398285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.398648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.398676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.399034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.399394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.399425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.399842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.400173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.400201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.400592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.400851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.400880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.401137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.401472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.401500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.401840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.402193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.402222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.402566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.402915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.402944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 
00:26:15.945 [2024-04-24 20:57:40.403323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.403676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.945 [2024-04-24 20:57:40.403706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.945 qpair failed and we were unable to recover it. 00:26:15.945 [2024-04-24 20:57:40.404112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.404490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.404519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.404872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.405233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.405261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.405555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.405913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.405942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.406317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.406746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.406776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.407200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.407559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.407587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.407948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.408329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.408358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 
00:26:15.946 [2024-04-24 20:57:40.408757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.409139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.409167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.409445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.409679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.409706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.410036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.410308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.410335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.410702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.411148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.411177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.411518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.411789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.411819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.412116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.412467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.412495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.412824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.413209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.413238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 
00:26:15.946 [2024-04-24 20:57:40.413594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.413976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.414006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.414376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.414748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.414777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.415163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.415407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.415434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.415804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.416215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.416243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.416411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.416812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.416840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.417230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.417555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.417585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.417871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.418242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.418272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 
00:26:15.946 [2024-04-24 20:57:40.418546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.418866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.418895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.419236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.419614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.419643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.419908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.420287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.420314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.420669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.421025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.421060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.421422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.421780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.421810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.422205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.422460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.422488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.422751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.423141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.423170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 
00:26:15.946 [2024-04-24 20:57:40.423425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.423744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.423773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.946 qpair failed and we were unable to recover it. 00:26:15.946 [2024-04-24 20:57:40.424109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.946 [2024-04-24 20:57:40.424433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.424460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.424793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.425185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.425212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.425589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.425928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.425957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.426304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.426666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.426695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.427094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.427393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.427421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.427766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.428147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.428181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 
00:26:15.947 [2024-04-24 20:57:40.428580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.428969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.428998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.429363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.429762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.429791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.430163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.430517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.430545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.430934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.431316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.431344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.431560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.431879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.431909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.432141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.432524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.432553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.432922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.433291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.433319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 
00:26:15.947 [2024-04-24 20:57:40.433694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.433955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.433983] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.434343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.434601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.434630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.434997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.435359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.435395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.435742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.436094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.436123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.436486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.436710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.436749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.437084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.437469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.437497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.437749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.438184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.438212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 
00:26:15.947 [2024-04-24 20:57:40.438553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.438825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.438853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.439121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.439487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.439516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.439886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.440228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.440256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.440493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.440861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.440890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.441046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.441470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.441499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.441775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.442157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.442190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.442525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.442865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.442894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 
00:26:15.947 [2024-04-24 20:57:40.443261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.443504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.443534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.947 [2024-04-24 20:57:40.443871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.444226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.947 [2024-04-24 20:57:40.444254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.947 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.444600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.444811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.444840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.445180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.445503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.445532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.445798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.446133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.446162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.446584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.446941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.446970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.447365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.447721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.447761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 
00:26:15.948 [2024-04-24 20:57:40.448146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.448498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.448526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.448793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.449176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.449204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.449547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.449917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.449947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.450325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.450700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.450749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.451152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.451498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.451525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.451916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.452068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.452094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.452504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.452739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.452770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 
00:26:15.948 [2024-04-24 20:57:40.453172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.453522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.453550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.453922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.454276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.454305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.454697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.455054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.455084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.455417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.455777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.455823] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.456169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.456427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.456454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.456778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.457128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.457156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.457519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.457816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.457843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 
00:26:15.948 [2024-04-24 20:57:40.458249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.458496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.458523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.458830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.459215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.459243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.459511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.459864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.459892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.460143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.460540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.460568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.460840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.461097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.461124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.461478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.461750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.461779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.462181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.462537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.462565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 
00:26:15.948 [2024-04-24 20:57:40.462821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.463185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.463214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.463484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.463854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.948 [2024-04-24 20:57:40.463884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.948 qpair failed and we were unable to recover it. 00:26:15.948 [2024-04-24 20:57:40.464274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.464632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.464661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.465047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.465372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.465400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.465787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.466105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.466134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.466391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.466551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.466578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.466852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.466980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.467005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 
00:26:15.949 [2024-04-24 20:57:40.467396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.467528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.467557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.467968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.468311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.468340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.468641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.468908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.468937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.469314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.469547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.469574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.469838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.470184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.470212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.470562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.470797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.470825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.471209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.471573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.471602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 
00:26:15.949 [2024-04-24 20:57:40.471883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.472193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.472220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.472517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.472865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.472893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.473270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.473600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.473628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.474064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.474424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.474452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.474842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.475232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.475260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.475635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.475864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.475895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.476182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.476525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.476553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 
00:26:15.949 [2024-04-24 20:57:40.476823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.477090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.477118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.477545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.477865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.477893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.478259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.478491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.478519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.478875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.479201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.479230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.479484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.479855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.479885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.480283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.480664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.480693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.481121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.481484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.481513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 
00:26:15.949 [2024-04-24 20:57:40.481864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.482194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.482222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.482559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.482828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.482858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.949 qpair failed and we were unable to recover it. 00:26:15.949 [2024-04-24 20:57:40.482998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.949 [2024-04-24 20:57:40.483231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.483260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.483533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.483926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.483955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.484359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.484721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.484761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.485116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.485364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.485391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.485764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.486028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.486058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 
00:26:15.950 [2024-04-24 20:57:40.486434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.486667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.486693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.486975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.487196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.487223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.487543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.487795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.487825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.488161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.488434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.488463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.488820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.489166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.489195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.489549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.489919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.489949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.490299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.490655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.490683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 
00:26:15.950 [2024-04-24 20:57:40.491021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.491349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.491376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.491753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.492150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.492177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.492539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.492876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.492906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.493283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.493607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.493635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.493994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.494360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.494389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.494752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.495106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.495136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.495488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.495743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.495771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 
00:26:15.950 [2024-04-24 20:57:40.496183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.496549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.496577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.496854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.497246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.497274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.497507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.497826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.497855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.498104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.498463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.498491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.498782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.499162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.950 [2024-04-24 20:57:40.499189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.950 qpair failed and we were unable to recover it. 00:26:15.950 [2024-04-24 20:57:40.499610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.499984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.500013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.500365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.500750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.500778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 
00:26:15.951 [2024-04-24 20:57:40.501178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.501459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.501487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.501788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.502148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.502177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.502517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.503017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.503046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.503419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.503747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.503777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.504177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.504521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.504550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.504890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.505228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.505256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.505494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.505818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.505847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 
00:26:15.951 [2024-04-24 20:57:40.506205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.506542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.506571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.506918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.507245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.507273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.507629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.507969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.508000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.508246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.508528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.508556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.508894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.509235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.509263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.509515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.509761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.509791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.510157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.510485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.510514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 
00:26:15.951 [2024-04-24 20:57:40.510878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.511114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.511141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.511500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.511759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.511787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.512045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.512420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.512448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.512716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.512975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.513004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.513242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.513608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.513636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.514044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.514407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.514435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.514836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.515167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.515197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 
00:26:15.951 [2024-04-24 20:57:40.515598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.515949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.515977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.516365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.516768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.516797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.517177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.517572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.517600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.517995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.518360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.518388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.518714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.519140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.519171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.951 qpair failed and we were unable to recover it. 00:26:15.951 [2024-04-24 20:57:40.519523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.951 [2024-04-24 20:57:40.519873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.519901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.520210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.520548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.520576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 
00:26:15.952 [2024-04-24 20:57:40.520884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.521285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.521315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.521675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.521970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.521999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.522383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.522803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.522832] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.523181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.523562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.523590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.523980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.524313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.524342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.524697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.525096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.525126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.525512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.525846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.525877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 
00:26:15.952 [2024-04-24 20:57:40.526249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.526610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.526645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.527008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.527386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.527415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.527791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.528069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.528096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.528351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.528599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.528626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.529049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.529407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.529436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.529816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.530207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.530237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.530603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.530844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.530872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 
00:26:15.952 [2024-04-24 20:57:40.531228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.531443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.531470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.531766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.532140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.532169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.532533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.532942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.532971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.533240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.533562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.533596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.533974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.534205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.534232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.534573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.535000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.535029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.535356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.535574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.535604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 
00:26:15.952 [2024-04-24 20:57:40.535968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.536356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.536385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.536755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.537156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.537184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.537459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.537850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.537880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.538287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.538608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.538637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.538994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.539349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.539376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.952 qpair failed and we were unable to recover it. 00:26:15.952 [2024-04-24 20:57:40.539641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.539922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.952 [2024-04-24 20:57:40.539949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.540334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.540680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.540711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 
00:26:15.953 [2024-04-24 20:57:40.541141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.541472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.541497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.541864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.542255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.542280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.542669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.542800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.542831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.543229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.543491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.543517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.543914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.544210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.544234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.544616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.544885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.544914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.545263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.545596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.545624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 
00:26:15.953 [2024-04-24 20:57:40.545799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.546142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.546169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.546536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.546866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.546896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.547291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.547616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.547651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.547885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.548253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.548283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.548633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.548966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.548997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.549381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.549772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.549804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.550071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.550305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.550335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 
00:26:15.953 [2024-04-24 20:57:40.550769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.551027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.551059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.551359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.551747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.551779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.552220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.552551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.552581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.552981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.553347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.553376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.553638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.553994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.554025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.554291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.554624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.554653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.554843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.555150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.555180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 
00:26:15.953 [2024-04-24 20:57:40.555515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.555888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.555920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.556307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.556690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.556719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.557149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.557516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.557546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.557926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.558194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.558226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.558498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.558830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.558862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.953 [2024-04-24 20:57:40.559255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.559531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.953 [2024-04-24 20:57:40.559560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.953 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.560027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.560271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.560300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 
00:26:15.954 [2024-04-24 20:57:40.560572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.560811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.560841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.561205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.561562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.561592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.561883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.562153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.562183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.562541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.562805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.562836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.563188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.563517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.563546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.563910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.564243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.564273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.564660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.564925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.564955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 
00:26:15.954 [2024-04-24 20:57:40.565305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.565546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.565574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.566063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.566246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.566273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.566522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.566861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.566891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.567282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.567675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.567703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.568075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.568247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.568280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.568709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.568974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.569005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.569246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.569591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.569620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 
00:26:15.954 [2024-04-24 20:57:40.569932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.570309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.570339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.570629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.570788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.570819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.571170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.571537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.571566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.571956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.572294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.572324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.572705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.573051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:15.954 [2024-04-24 20:57:40.573082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:15.954 qpair failed and we were unable to recover it. 00:26:15.954 [2024-04-24 20:57:40.573464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.573906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.573940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.574100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.574603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.574633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 
00:26:16.223 [2024-04-24 20:57:40.575012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.575376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.575406] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.575831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.576215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.576244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.576617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.576880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.576910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.577255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.577603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.577633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.578013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.578350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.578379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.578627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.578974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.579005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.579379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.579744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.579774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 
00:26:16.223 [2024-04-24 20:57:40.580170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.580544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.580573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.580966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.581216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.581246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.581631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.581894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.581924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.582078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.582455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.582485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.582751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.583056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.583086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.583450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.583819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.583848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.584243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.584587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.584616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 
00:26:16.223 [2024-04-24 20:57:40.584888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.585278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.585308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.585656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.586007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.586038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.586382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.586752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.586782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.587182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.587543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.587571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.587950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.588338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.223 [2024-04-24 20:57:40.588367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.223 qpair failed and we were unable to recover it. 00:26:16.223 [2024-04-24 20:57:40.588623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.588991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.589020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.589272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.589532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.589561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 
00:26:16.224 [2024-04-24 20:57:40.589969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.590302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.590331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.590711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.591097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.591127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.591488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.591820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.591850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.592220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.592501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.592531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.592916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.593318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.593347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.593767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.594183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.594213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.594588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.594985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.595016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 
00:26:16.224 [2024-04-24 20:57:40.595402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.595763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.595793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.596159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.596519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.596549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.596954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.597275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.597305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.597668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.598057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.598087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.598454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.598700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.598744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.599165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.599539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.599567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.599843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.600069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.600099] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 
00:26:16.224 [2024-04-24 20:57:40.600440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.600821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.600851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.601242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.601559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.601588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.601941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.602307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.602336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.602704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.603062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.603092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.603461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.603749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.603778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.604143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.604495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.604524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.604888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.605280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.605308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 
00:26:16.224 [2024-04-24 20:57:40.605670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.606035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.606065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.606343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.606719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.606759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.607166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.607536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.607565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.607827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.608197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.608226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.608615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.608963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.608993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.224 [2024-04-24 20:57:40.609347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.609593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.224 [2024-04-24 20:57:40.609622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.224 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.610020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.610375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.610405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 
00:26:16.225 [2024-04-24 20:57:40.610773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.612778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.612841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.613285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.614984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.615039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.615443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.615814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.615846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.616223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.616604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.616633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.616997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.617354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.617382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.617758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.618182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.618212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.618561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.618925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.618955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 
00:26:16.225 [2024-04-24 20:57:40.619326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.619688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.619716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.620095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.620496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.620525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.620886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.621227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.621255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.621610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.621965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.621995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.622364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.622690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.622719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.624576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.624994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.625028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.625473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.625804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.625834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 
00:26:16.225 [2024-04-24 20:57:40.626232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.626613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.626642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.627008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.627371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.627401] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.627750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.628127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.628156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.628522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.628881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.628912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.629272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.629636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.629665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.630035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.630401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.630431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.630814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.631071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.631101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 
00:26:16.225 [2024-04-24 20:57:40.631513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.631780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.631809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.632213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.632615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.632644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.633008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.633369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.633398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.633764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.634145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.634174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.634430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.634670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.634699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.635080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.635460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.635489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 00:26:16.225 [2024-04-24 20:57:40.635807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.636177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.225 [2024-04-24 20:57:40.636205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.225 qpair failed and we were unable to recover it. 
00:26:16.226 [2024-04-24 20:57:40.636587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.636919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.636950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.637367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.637706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.637745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.638132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.638384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.638412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.638813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.639176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.639205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.639616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.640034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.640065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.640414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.640877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.640907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.641280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.641608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.641637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 
00:26:16.226 [2024-04-24 20:57:40.641982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.642223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.642250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.642630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.643042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.643071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.643441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.643766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.643796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.644262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.644674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.644702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.645127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.645449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.645478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.645875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.646203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.646232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 00:26:16.226 [2024-04-24 20:57:40.646592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.646937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.226 [2024-04-24 20:57:40.646968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.226 qpair failed and we were unable to recover it. 
00:26:16.226 [2024-04-24 20:57:40.647344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.226 [2024-04-24 20:57:40.647748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:16.226 [2024-04-24 20:57:40.647783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:16.226 qpair failed and we were unable to recover it.
[This failure cycle (a pair of posix_sock_create connect() failures with errno = 111, then nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent connection attempt from [2024-04-24 20:57:40.648172] through [2024-04-24 20:57:40.776609]; no attempt succeeds and the qpair is never recovered.]
00:26:16.232 [2024-04-24 20:57:40.777011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.777376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.777402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.777749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.778131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.778157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.778594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.778919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.778949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.779196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.779538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.779565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.779795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.780204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.780232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.780548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.780916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.780944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.781345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.781715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.781753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 
00:26:16.232 [2024-04-24 20:57:40.782166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.782583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.782616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.782990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.783250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.783277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.783547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.783925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.783953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.784335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.784673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.784699] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.785088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.785417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.785444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.785838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.786212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.786239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.786536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.786821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.786849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 
00:26:16.232 [2024-04-24 20:57:40.787246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.787564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.787591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.788015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.788386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.788413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.788807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.789244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.789270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.789634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.789987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.790019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.790282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.790632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.790659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.791022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.791376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.791403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.791813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.792090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.792116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 
00:26:16.232 [2024-04-24 20:57:40.792490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.792782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.792810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.793204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.793564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.793590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.794001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.794405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.794432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.232 qpair failed and we were unable to recover it. 00:26:16.232 [2024-04-24 20:57:40.794699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.232 [2024-04-24 20:57:40.794994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.795022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.795398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.795769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.795796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.796083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.796426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.796453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.796857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.797246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.797279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 
00:26:16.233 [2024-04-24 20:57:40.797670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.798045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.798073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.798338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.798639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.798665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.799032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.799407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.799434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.799836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.800199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.800226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.800635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.800852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.800879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.801239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.801496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.801526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.801921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.802318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.802345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 
00:26:16.233 [2024-04-24 20:57:40.802776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.803159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.803185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.803564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.803890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.803926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.804275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.804631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.804662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.804940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.805332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.805359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.805621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.806082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.806110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.806481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.806842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.806870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.807250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.807620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.807647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 
00:26:16.233 [2024-04-24 20:57:40.808052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.808451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.808478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.808822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.809056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.809085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.809461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.809782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.809809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.810167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.810423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.810449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.810664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.811093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.811121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.811553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.811944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.811973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.812347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.812720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.812779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 
00:26:16.233 [2024-04-24 20:57:40.813142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.813374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.813403] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.813788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.814164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.814190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.814540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.814848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.814876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.815243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.815603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.815629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.233 qpair failed and we were unable to recover it. 00:26:16.233 [2024-04-24 20:57:40.815998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.233 [2024-04-24 20:57:40.816396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.816422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.816682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.816889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.816917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.817298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.817628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.817655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 
00:26:16.234 [2024-04-24 20:57:40.818007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.818350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.818377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.818760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.819115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.819142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.819527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.819798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.819825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.820268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.820497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.820525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.820904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.821141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.821170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.821583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.821926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.821954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.822317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.822687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.822713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 
00:26:16.234 [2024-04-24 20:57:40.823154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.823560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.823587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.823960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.824219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.824246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.824640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.824893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.824920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.825313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.825602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.825628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.826021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.826379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.826407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.826773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.827171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.827199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.827558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.827900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.827928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 
00:26:16.234 [2024-04-24 20:57:40.828307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.828667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.828694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.829066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.829439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.829466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.829804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.830165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.830192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.830580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.830940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.830968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.831349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.831675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.831702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.832088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.832356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.832383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.832761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.833121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.833148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 
00:26:16.234 [2024-04-24 20:57:40.833408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.833794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.833822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.834215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.834576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.834602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.835006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.835371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.835398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.835769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.836153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.836179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.836558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.836923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.836950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.234 qpair failed and we were unable to recover it. 00:26:16.234 [2024-04-24 20:57:40.837317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.234 [2024-04-24 20:57:40.837689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.837716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.838082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.838473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.838501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 
00:26:16.235 [2024-04-24 20:57:40.838863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.839103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.839132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.839500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.839843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.839870] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.840248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.840603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.840630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.840899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.841262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.841288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.841676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.841935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.841963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.842325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.842667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.842694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.843058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.843419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.843447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 
00:26:16.235 [2024-04-24 20:57:40.843828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.844217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.844244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.844612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.844969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.844997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.845367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.845763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.845791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.846162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.846504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.846531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.846883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.847241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.847267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.847620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.847939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.847967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.848341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.848706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.848750] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 
00:26:16.235 [2024-04-24 20:57:40.849166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.849509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.849536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.849927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.850277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.850304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.850674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.851058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.851085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.851466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.851835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.851864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.852116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.852489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.852516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.852868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.853279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.853306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.853689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.854064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.854092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 
00:26:16.235 [2024-04-24 20:57:40.854443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.854689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.854718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.855066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.855327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.855355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.235 [2024-04-24 20:57:40.855721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.856123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.235 [2024-04-24 20:57:40.856151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.235 qpair failed and we were unable to recover it. 00:26:16.504 [2024-04-24 20:57:40.856552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.504 [2024-04-24 20:57:40.856906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.504 [2024-04-24 20:57:40.856937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.504 qpair failed and we were unable to recover it. 00:26:16.504 [2024-04-24 20:57:40.857279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.504 [2024-04-24 20:57:40.857652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.504 [2024-04-24 20:57:40.857679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.504 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.857983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.858346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.858375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.858751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.859131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.859157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 
00:26:16.505 [2024-04-24 20:57:40.859547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.859906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.859933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.860176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.860533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.860560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.860930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.861176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.861204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.861594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.862004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.862032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.862430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.862766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.862794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.863146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.863499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.863525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.863944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.864223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.864249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 
00:26:16.505 [2024-04-24 20:57:40.864601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.864958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.864986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.865355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.865723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.865762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.866123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.866492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.866518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.866790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.867152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.867178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.867443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.867803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.867830] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.868199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.868426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.868455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.868814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.869175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.869201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 
00:26:16.505 [2024-04-24 20:57:40.869583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.869921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.869948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.870308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.870662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.870689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.871079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.871479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.871506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.871878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.872223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.872250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.872621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.872970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.872998] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.873366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.873739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.873766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.874127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.874500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.874527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 
00:26:16.505 [2024-04-24 20:57:40.874885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.875252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.875279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.875637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.875985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.876014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.876403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.876752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.876780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.877017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.877400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.877428] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.877813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.878191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.505 [2024-04-24 20:57:40.878219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.505 qpair failed and we were unable to recover it. 00:26:16.505 [2024-04-24 20:57:40.878588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.878974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.879002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.879376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.879711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.879749] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 
00:26:16.506 [2024-04-24 20:57:40.880093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.880450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.880477] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.880854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.881207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.881234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.881513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.881878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.881906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.882282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.882657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.882684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.882960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.883324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.883351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.883756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.883985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.884013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.884319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.884644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.884672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 
00:26:16.506 [2024-04-24 20:57:40.885051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.885407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.885434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.885812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.886211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.886237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.886593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.886967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.886994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.887379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.887771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.887798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.888181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.888513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.888540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.888899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.889258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.889286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.889670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.890035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.890064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 
00:26:16.506 [2024-04-24 20:57:40.890445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.890783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.890811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.891197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.891567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.891594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.891963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.892333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.892361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.892698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.893028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.893056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.893423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.893791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.893826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.894196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.894541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.894568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.894834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.895203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.895231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 
00:26:16.506 [2024-04-24 20:57:40.895600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.896000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.896028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.896382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.896746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.896773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.897155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.897522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.897550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.897885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.898317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.898344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.898715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.899086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.899114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.506 qpair failed and we were unable to recover it. 00:26:16.506 [2024-04-24 20:57:40.899564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.506 [2024-04-24 20:57:40.899955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.899982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.900351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.900722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.900760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 
00:26:16.507 [2024-04-24 20:57:40.900985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.901240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.901272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.901635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.901985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.902013] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.902385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.902744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.902772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.903141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.903492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.903519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.903886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.904227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.904253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.904636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.905001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.905029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.905298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.905702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.905740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 
00:26:16.507 [2024-04-24 20:57:40.906096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.906449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.906478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.906836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.907203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.907230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.907604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.907963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.907991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.908357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.908738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.908772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.909158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.909543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.909570] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.909906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.910272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.910298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.910652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.910984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.911012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 
00:26:16.507 [2024-04-24 20:57:40.911383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.911758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.911787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.912174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.912438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.912464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.912844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.913197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.913223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.913580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.913965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.913992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.914353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.914682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.914709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.915135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.915500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.915527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.915910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.916249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.916280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 
00:26:16.507 [2024-04-24 20:57:40.916651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.917047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.917074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.917422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.917666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.917696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.918092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.918457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.918484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.918856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.919219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.919246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.919615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.919973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.920001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.507 [2024-04-24 20:57:40.920356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.920757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.507 [2024-04-24 20:57:40.920784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.507 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.921155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.921515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.921541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 
00:26:16.508 [2024-04-24 20:57:40.921919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.922262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.922288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.922660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.923021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.923049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.923303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.923536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.923563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.923963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.924310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.924336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.924608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.924976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.925003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.925444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.925781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.925809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.926179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.926429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.926454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 
00:26:16.508 [2024-04-24 20:57:40.926835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.927207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.927234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.927484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.927840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.927869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.928215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.928549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.928574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.928964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.929320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.929347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.929723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.930112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.930138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.930477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.930831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.930860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.931302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.931647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.931673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 
00:26:16.508 [2024-04-24 20:57:40.932066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.932466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.932492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.932857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.933247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.933274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.933682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.933991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.934018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.934367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.934751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.934780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.935182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.935551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.935578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.935956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.936313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.936339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.936713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.937118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.937144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 
00:26:16.508 [2024-04-24 20:57:40.937377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.937752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.937781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.938170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.938552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.938579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.938910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.939259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.939285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.939663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.940005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.940032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.940404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.940757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.940784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.508 [2024-04-24 20:57:40.941163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.941520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.508 [2024-04-24 20:57:40.941547] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.508 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.941935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.942299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.942326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 
00:26:16.509 [2024-04-24 20:57:40.942691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.943080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.943108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.943486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.943869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.943898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.944236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.944471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.944499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.944787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.945143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.945170] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.945589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.945931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.945959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.946320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.946662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.946689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.947086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.947411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.947438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 
00:26:16.509 [2024-04-24 20:57:40.947751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.948086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.948112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.948466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.948837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.948866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.949132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.949496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.949522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.949884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.950217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.950244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.950517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.950767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.950797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.951193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.951556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.951583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.951923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.952283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.952309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 
00:26:16.509 [2024-04-24 20:57:40.952677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.953016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.953044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.953438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.953800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.953828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.954208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.954536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.954563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.954950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.955308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.955334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.955707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.956112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.956139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.956551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.956914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.956943] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 00:26:16.509 [2024-04-24 20:57:40.957318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.957659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.957685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.509 qpair failed and we were unable to recover it. 
00:26:16.509 [2024-04-24 20:57:40.958140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.509 [2024-04-24 20:57:40.958510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.958537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.958905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.959251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.959278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.959655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.959986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.960012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.960371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.960740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.960768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.961144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.961524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.961550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.961924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.962171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.962201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.962572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.962898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.962926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 
00:26:16.510 [2024-04-24 20:57:40.963296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.963651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.963678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.964036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.964382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.964408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.964791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.965189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.965217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.965602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.966019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.966047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.966416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.966776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.966812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.967169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.967526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.967553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.967920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.968257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.968284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 
00:26:16.510 [2024-04-24 20:57:40.968685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.969061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.969088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.969471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.969810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.969837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.970096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.970492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.970519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.970869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.971235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.971262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.971634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.972009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.972038] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.972442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.972763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.972791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.973202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.973521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.973548] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 
00:26:16.510 [2024-04-24 20:57:40.973916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.974192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.974217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.974512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.974832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.974859] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.975252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.975602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.975628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.976081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.976446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.976473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.976819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.977076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.977102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.977480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.977830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.977857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 00:26:16.510 [2024-04-24 20:57:40.978230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.978623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.978650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.510 qpair failed and we were unable to recover it. 
00:26:16.510 [2024-04-24 20:57:40.979075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.510 [2024-04-24 20:57:40.979434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.979459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.979747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.980120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.980147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.980505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.980872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.980899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.981148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.981519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.981546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.981914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.982307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.982333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.982711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.983100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.983128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.983488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.983843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.983871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 
00:26:16.511 [2024-04-24 20:57:40.984158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.984552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.984578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.984939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.985294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.985320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.985695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.986055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.986082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.986434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.986807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.986836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.987219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.987597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.987623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.987872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.988233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.988259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.988528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.988924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.988952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 
00:26:16.511 [2024-04-24 20:57:40.989329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.989719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.989757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.990113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.990514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.990540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.990907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.991251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.991278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.991628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.991965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.991993] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.992405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.992772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.992801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.993157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.993500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.993526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.993882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.994239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.994267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 
00:26:16.511 [2024-04-24 20:57:40.994638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.994871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.994901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.995281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.995616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.995643] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.995997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.996377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.996405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.996776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.997164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.997191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.997554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.997898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.997926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.998298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.998629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.998656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 00:26:16.511 [2024-04-24 20:57:40.999056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.999436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:40.999463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.511 qpair failed and we were unable to recover it. 
00:26:16.511 [2024-04-24 20:57:40.999825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.511 [2024-04-24 20:57:41.000065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.000094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.000388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.000749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.000778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.001115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.001466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.001493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.001897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.002129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.002155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.002452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.002781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.002808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.003065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.003443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.003469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.003821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.004180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.004207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 
00:26:16.512 [2024-04-24 20:57:41.004573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.004940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.004968] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.005355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.005737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.005766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.006120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.006484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.006510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.006763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.007133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.007160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.007510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.007862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.007891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.008245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.008612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.008638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.008988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.009341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.009367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 
00:26:16.512 [2024-04-24 20:57:41.009712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.010090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.010119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.010536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.010781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.010809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.011165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.011514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.011543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.011905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.012280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.012306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.012686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.013068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.013101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.013471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.013820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.013848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.014209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.014552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.014578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 
00:26:16.512 [2024-04-24 20:57:41.014963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.015339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.015366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.015714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.016101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.016129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.016490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.016884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.016912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.017280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.017609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.017635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.017991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.018283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.018310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.018690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.019063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.019090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.019454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.019810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.019839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 
00:26:16.512 [2024-04-24 20:57:41.020232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.020577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.020608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.512 qpair failed and we were unable to recover it. 00:26:16.512 [2024-04-24 20:57:41.020856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.512 [2024-04-24 20:57:41.021220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.021247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.021624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.021853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.021884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.022269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.022618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.022644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.023024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.023426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.023452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.023828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.024221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.024248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.024427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.024820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.024848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 
00:26:16.513 [2024-04-24 20:57:41.025204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.025566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.025592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.026044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.026418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.026446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.026815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.027234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.027261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.027516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.027774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.027808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.028200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.028444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.028490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.028883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.029248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.029276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.029494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.029751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.029779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 
00:26:16.513 [2024-04-24 20:57:41.030196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.030542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.030568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.031012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.031339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.031365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.031752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.031998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.032024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.032390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.032807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.032835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.033215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.033588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.033616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.033886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.034180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.034207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.034551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.034923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.034957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 
00:26:16.513 [2024-04-24 20:57:41.035330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.035691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.035717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.036103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.036453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.036479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.036865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.037235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.037262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.037618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.037994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.038022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.038351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.038589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.038619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.039035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.039393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.039419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.039713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.040053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.040080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 
00:26:16.513 [2024-04-24 20:57:41.040457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.040771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.040800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.513 [2024-04-24 20:57:41.041070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.041389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.513 [2024-04-24 20:57:41.041416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.513 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.041759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.042097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.042123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.042490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.042880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.042908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.043164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.043526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.043553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.043925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.044288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.044314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.044677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.045009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.045036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 
00:26:16.514 [2024-04-24 20:57:41.045375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.045763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.045793] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.046143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.046379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.046409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.046764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.047175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.047201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.047463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.047821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.047849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.048207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.048577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.048604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.049004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.049361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.049389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.049758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.050088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.050115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 
00:26:16.514 [2024-04-24 20:57:41.050278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.050638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.050665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.050934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.051189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.051215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.051605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.053518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.053578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.054015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.054406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.054433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.054770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.055136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.055163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.055552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.055915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.055942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.056313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.056630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.056657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 
00:26:16.514 [2024-04-24 20:57:41.057022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.057366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.057393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.057770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.058155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.058182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.058534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.058852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.058881] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.059288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.059561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.059587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.514 qpair failed and we were unable to recover it. 00:26:16.514 [2024-04-24 20:57:41.059985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.514 [2024-04-24 20:57:41.060246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.060273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.060564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.060908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.060937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.061323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.061691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.061719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 
00:26:16.515 [2024-04-24 20:57:41.062091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.062318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.062344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.062717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.063080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.063108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.063463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.063814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.063844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.064232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.064567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.064593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.064982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.065317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.065343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.065721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.066074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.066101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.066465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.066698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.066740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 
00:26:16.515 [2024-04-24 20:57:41.067172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.067526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.067552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.067812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.068045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.068072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.068438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.068800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.068828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.069214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.069538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.069565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.069958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.070315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.070341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.070792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.071170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.071198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.071565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.071915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.071944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 
00:26:16.515 [2024-04-24 20:57:41.072326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.072667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.072693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.073094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.073342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.073369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.073716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.074127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.074154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.074529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.074785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.074812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.075191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.075529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.075555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.075924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.076326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.076352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.076716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.077171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.077198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 
00:26:16.515 [2024-04-24 20:57:41.077575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.077927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.077955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.078333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.078701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.078739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.079107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.079350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.079376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.079754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.080167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.080195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.080570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.080939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.515 [2024-04-24 20:57:41.080967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.515 qpair failed and we were unable to recover it. 00:26:16.515 [2024-04-24 20:57:41.081304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.081659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.081685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.082047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.082286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.082312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 
00:26:16.516 [2024-04-24 20:57:41.082566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.082928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.082956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.083306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.083670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.083698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.084076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.084415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.084441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.084824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.085173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.085199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.085567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.085962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.085990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.086358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.086718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.086757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.087131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.087488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.087515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 
00:26:16.516 [2024-04-24 20:57:41.087925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.088289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.088316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.088564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.088884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.088913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.089270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.089502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.089532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.089917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.090168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.090194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.090574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.090942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.090970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.091324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.091612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.091639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.092010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.092363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.092390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 
00:26:16.516 [2024-04-24 20:57:41.092766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.093175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.093201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.093565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.093931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.093959] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.094330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.094697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.094734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.095129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.095468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.095495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.095864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.096223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.096249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.096629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.096971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.097000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.097329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.097704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.097741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 
00:26:16.516 [2024-04-24 20:57:41.098102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.098455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.098482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.098838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.099213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.099240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.099615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.099981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.100009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.100381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.100794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.100822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.101193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.101600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.101626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.516 [2024-04-24 20:57:41.102004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.102357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.516 [2024-04-24 20:57:41.102390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.516 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.102766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.103142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.103168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 
00:26:16.517 [2024-04-24 20:57:41.103538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.103814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.103842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.104201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.104557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.104584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.104831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.105223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.105249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.105607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.105950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.105978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.106364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.106707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.106743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.107110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.107469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.107495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.107851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.108240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.108265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 
00:26:16.517 [2024-04-24 20:57:41.108660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.109005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.109042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.109431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.109790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.109817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.110140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.110497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.110524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.110909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.111259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.111285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.111645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.112008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.112035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.112407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.112773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.112801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.113174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.113532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.113558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 
00:26:16.517 [2024-04-24 20:57:41.113905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.114289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.114315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.114790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.115122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.115148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.115526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.115894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.115921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.116295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.116624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.116651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.117045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.117400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.117425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.117773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.118185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.118213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.118515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.118878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.118905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 
00:26:16.517 [2024-04-24 20:57:41.119288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.119637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.119663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.120064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.120409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.120435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.120675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.121079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.121107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.121489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.121862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.121892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.122267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.122602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.122630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.123011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.123247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.123273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.517 qpair failed and we were unable to recover it. 00:26:16.517 [2024-04-24 20:57:41.123535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.517 [2024-04-24 20:57:41.123903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.123931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 
00:26:16.518 [2024-04-24 20:57:41.124294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.124626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.124652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.125030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.125425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.125454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.125830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.126170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.126198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.126565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.126920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.126948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.127335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.127711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.127748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.128039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.128417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.128443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.128808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.129220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.129246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 
00:26:16.518 [2024-04-24 20:57:41.129605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.129948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.129984] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.130345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.130682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.130708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.131085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.131448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.131474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.131853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.132228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.132254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.132624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.132967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.133001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.133367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.133693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.133721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.134096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.134437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.134464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 
00:26:16.518 [2024-04-24 20:57:41.134860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.135241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.135268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.135662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.136037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.136065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.136442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.136792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.136820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.137098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.137454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.518 [2024-04-24 20:57:41.137480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.518 qpair failed and we were unable to recover it. 00:26:16.518 [2024-04-24 20:57:41.137747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.138117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.138147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.138403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.138772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.138801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.139155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.139515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.139542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 
00:26:16.787 [2024-04-24 20:57:41.139801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.140141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.140174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.140516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.140871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.140899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.141286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.141653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.141680] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.142048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.142381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.142408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.142798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.143162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.143195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.143609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.143983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.144011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.144259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.144646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.144673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 
00:26:16.787 [2024-04-24 20:57:41.145044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.145405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.145433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.145780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.146022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.146051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.146412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.146763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.146791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.147174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.147506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.147538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.147903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.148264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.148291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.148557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.148947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.148974] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.149336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.149704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.149740] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 
00:26:16.787 [2024-04-24 20:57:41.150111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.150453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.150479] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.150887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.151215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.151242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.151499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.151856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.151884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.152298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.152679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.152706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.153085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.153423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.153449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.787 [2024-04-24 20:57:41.153823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.154235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.787 [2024-04-24 20:57:41.154261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.787 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.154601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.154960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.154994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 
00:26:16.788 [2024-04-24 20:57:41.155411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.155779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.155808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.156185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.156529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.156556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.156928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.157266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.157293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.157676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.158041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.158069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.158483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.158817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.158844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.159220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.159581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.159608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.159969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.160337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.160364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 
00:26:16.788 [2024-04-24 20:57:41.160758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.161020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.161050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.161428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.161792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.161821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.162194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.162558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.162584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.162881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.163173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.163200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.163543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.163885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.163914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.164313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.164655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.164682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.165068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.165422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.165449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 
00:26:16.788 [2024-04-24 20:57:41.165820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.166189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.166216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.166579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.166949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.166977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.167355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.167717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.167755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.168123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.168474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.168501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.168872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.169248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.169275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.169618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.169973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.170001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.170369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.170710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.170748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 
00:26:16.788 [2024-04-24 20:57:41.171082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.171429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.171455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.171821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.172223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.172250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.172629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.172868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.172898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.173155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.173490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.173517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.173861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.174157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.174184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.788 [2024-04-24 20:57:41.174428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.174798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.788 [2024-04-24 20:57:41.174826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.788 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.175180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.175558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.175584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 
00:26:16.789 [2024-04-24 20:57:41.175828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.176258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.176283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.176659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.177010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.177039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.177387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.177772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.177799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.178160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.178480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.178507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.178863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.179218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.179245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.179602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.179919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.179947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.180283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.180633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.180660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 
00:26:16.789 [2024-04-24 20:57:41.181018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.181366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.181393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.181771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.182111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.182137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.182490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.182876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.182903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.183314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.183661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.183688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.184111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.184511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.184538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.184912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.185281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.185307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.185670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.186003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.186031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 
00:26:16.789 [2024-04-24 20:57:41.186384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.186749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.186777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.187181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.187416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.187445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.187865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.188117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.188143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.188560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.188926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.188953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.189319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.189678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.189704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.190084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.190455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.190481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.190840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.191180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.191206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 
00:26:16.789 [2024-04-24 20:57:41.191570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.191927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.191955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.192302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.192644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.192670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.193124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.193489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.193516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.193886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.194230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.194257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.194598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.194925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.194952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.789 qpair failed and we were unable to recover it. 00:26:16.789 [2024-04-24 20:57:41.195344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.195699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.789 [2024-04-24 20:57:41.195734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.196142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.196477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.196504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 
00:26:16.790 [2024-04-24 20:57:41.196884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.197236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.197262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.197645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.197932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.197961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.198341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.198717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.198752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.198991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.199313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.199340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.199719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.200097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.200124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.200493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.200824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.200851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.201220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.201576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.201602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 
00:26:16.790 [2024-04-24 20:57:41.201980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.202337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.202363] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.202741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.202990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.203016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.203417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.203798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.203825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.204193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.204563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.204589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.204957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.205317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.205344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.205719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.206111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.206138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.206512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.206855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.206883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 
00:26:16.790 [2024-04-24 20:57:41.207250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.207582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.207609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.207978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.208356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.208382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.208761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.209128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.209154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.209510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.209869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.209896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.210267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.210636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.210662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.211014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.211408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.211434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.211826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.212071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.212097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 
00:26:16.790 [2024-04-24 20:57:41.212457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.212721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.212764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.213138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.213477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.213503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.213877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.214244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.214270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.214622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.214950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.214978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.215218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.215493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.215519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.790 [2024-04-24 20:57:41.215871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.216265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.790 [2024-04-24 20:57:41.216291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.790 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.216661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.217019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.217047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 
00:26:16.791 [2024-04-24 20:57:41.217416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.217749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.217777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.218116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.218473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.218499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.218874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.219239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.219267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.219518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.219910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.219937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.220216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.220571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.220598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.221006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.221376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.221402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.221767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.222126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.222155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 
00:26:16.791 [2024-04-24 20:57:41.222370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.222745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.222774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.223110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.223456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.223483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.223761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.224143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.224169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.224543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.224920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.224948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.225310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.225654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.225681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.226122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.226476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.226503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.226868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.227296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.227322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 
00:26:16.791 [2024-04-24 20:57:41.227663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.228019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.228047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.228393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.228753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.228782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.229145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.229510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.229537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.229916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.230204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.230230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.230561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.230925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.230952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.231330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.231745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.231773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 00:26:16.791 [2024-04-24 20:57:41.231996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.232387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.232413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.791 qpair failed and we were unable to recover it. 
00:26:16.791 [2024-04-24 20:57:41.232778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.791 [2024-04-24 20:57:41.233164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.233190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.233572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.233928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.233956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.234340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.234732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.234760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.235150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.235524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.235551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.235900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.236168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.236194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.236549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.236913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.236941] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.237286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.237668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.237694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 
00:26:16.792 [2024-04-24 20:57:41.237964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.238350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.238377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.238740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.239137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.239164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.239631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.240014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.240042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.240437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.240778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.240805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.241187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.241548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.241574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.241841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.242210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.242237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.242584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.242942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.242969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 
00:26:16.792 [2024-04-24 20:57:41.243343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.243707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.243742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.244103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.244405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.244437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.244825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.245190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.245217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.245588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.245993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.246021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.246461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.246826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.246854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.247206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.247559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.247585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.247933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.248304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.248331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 
00:26:16.792 [2024-04-24 20:57:41.248704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.249121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.249148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.249524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.249889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.249917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.250298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.250678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.250705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.251070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.251502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.251528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.251776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.252062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.252095] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.252449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.252816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.252844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.253212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.253575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.253602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 
00:26:16.792 [2024-04-24 20:57:41.253961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.254208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.792 [2024-04-24 20:57:41.254234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.792 qpair failed and we were unable to recover it. 00:26:16.792 [2024-04-24 20:57:41.254601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.254961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.254988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.255357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.255721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.255757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.256174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.256563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.256589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.256949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.257317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.257344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.257704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.257982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.258010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.258384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.258716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.258752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 
00:26:16.793 [2024-04-24 20:57:41.259123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.259482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.259514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.259866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.260127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.260153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.260494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.260832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.260860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.261309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.261558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.261583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.261962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.262312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.262338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.262722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.263078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.263104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.263487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.263869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.263898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 
00:26:16.793 [2024-04-24 20:57:41.264274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.264618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.264645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.264988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.265343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.265371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.265765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.266112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.266139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.266506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.266861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.266894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.267272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.267635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.267661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.267974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.268311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.268338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.268758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.268995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.269023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 
00:26:16.793 [2024-04-24 20:57:41.269437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.269818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.269847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.270217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.270578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.270609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.270960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.271363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.271393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.271776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.272149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.272179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.272534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.272913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.272944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.273314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.273682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.273711] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.274161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.274510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.274540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 
00:26:16.793 [2024-04-24 20:57:41.274900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.275276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.275306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.793 [2024-04-24 20:57:41.275665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.276020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.793 [2024-04-24 20:57:41.276050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.793 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.276422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.276784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.276814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.277195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.277552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.277580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.277929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.278282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.278310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.278667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.279030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.279061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.279424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.279765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.279798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 
00:26:16.794 [2024-04-24 20:57:41.280172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.280529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.280557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.280895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.281302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.281330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.281688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.282078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.282108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.282480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.282843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.282874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.283251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.283608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.283637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.283987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.284318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.284346] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.284681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.285083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.285113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 
00:26:16.794 [2024-04-24 20:57:41.285470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.285716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.285756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.286127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.286557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.286586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.286929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.287277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.287307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.287556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.287917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.287947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.288307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.288650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.288678] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.289054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.289419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.289449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.289689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.290082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.290114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 
00:26:16.794 [2024-04-24 20:57:41.290462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.290848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.290878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.291249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.291495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.291526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.291907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.292292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.292321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.292692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.293098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.293128] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.293493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.293849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.293879] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.294258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.294650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.294679] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.295107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.295478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.295508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 
00:26:16.794 [2024-04-24 20:57:41.295854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.296228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.296257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.296608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.296835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.296868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.794 qpair failed and we were unable to recover it. 00:26:16.794 [2024-04-24 20:57:41.297244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.794 [2024-04-24 20:57:41.297598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.297628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.297972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.298323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.298353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.298738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.299128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.299156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.299520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.299921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.299952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.300329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.300561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.300589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 
00:26:16.795 [2024-04-24 20:57:41.300903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.301252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.301282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.301552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.301908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.301937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.302314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.302673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.302701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.303095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.303462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.303491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.303864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.304251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.304279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.304672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.305071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.305101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.305445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.305795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.305826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 
00:26:16.795 [2024-04-24 20:57:41.306191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.306552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.306582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.306921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.307288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.307316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.307705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.308111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.308140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.308524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.308869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.308900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.309256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.309617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.309646] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.310030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.310388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.310416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.310793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.311170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.311199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 
00:26:16.795 [2024-04-24 20:57:41.311538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.311921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.311950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.312333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.312563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.312594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.312959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.313355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.313385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.313764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.314111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.314141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.314500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.314829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.314858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.315200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.315606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.315634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.315966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.316342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.316370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 
00:26:16.795 [2024-04-24 20:57:41.316716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.317087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.317117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.317475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.317812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.317843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.318209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.318430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.795 [2024-04-24 20:57:41.318457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.795 qpair failed and we were unable to recover it. 00:26:16.795 [2024-04-24 20:57:41.318817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.319182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.319211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.319585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.319883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.319911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.320274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.320496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.320525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.320773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.321177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.321205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 
00:26:16.796 [2024-04-24 20:57:41.321593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.321959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.321989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.322356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.322734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.322763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.323147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.323503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.323532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.323898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.324269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.324298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.324677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.325079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.325109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.325481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.325834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.325864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.326229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.326577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.326606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 
00:26:16.796 [2024-04-24 20:57:41.326984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.327348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.327376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.327752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.328138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.328166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.328538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.328894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.328924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.329288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.329534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.329564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.329945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.330300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.330329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.330695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.331065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.331096] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.331462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.331820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.331849] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 
00:26:16.796 [2024-04-24 20:57:41.332211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.332565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.332593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.332930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.333334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.333362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.333756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.334113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.334142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.334484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.334886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.334916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.335277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.335644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.335673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.335921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.336291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.336320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.796 qpair failed and we were unable to recover it. 00:26:16.796 [2024-04-24 20:57:41.336695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.337067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.796 [2024-04-24 20:57:41.337097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 
00:26:16.797 [2024-04-24 20:57:41.337445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.337782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.337811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.338067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.338438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.338467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.338886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.339117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.339145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.339512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.339878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.339909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.340233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.340594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.340624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.340977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.341190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.341218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.341654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.342059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.342090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 
00:26:16.797 [2024-04-24 20:57:41.342459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.342814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.342844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.343224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.343574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.343603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.343980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.344342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.344371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.344741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.345115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.345144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.345357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.345749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.345780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.346142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.346505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.346534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.346900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.347287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.347314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 
00:26:16.797 [2024-04-24 20:57:41.347681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.348089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.348119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.348488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.348825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.348854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.349235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.349596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.349626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.349988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.350390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.350419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.350782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.351123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.351152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.351528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.351884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.351913] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.352276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.352634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.352663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 
00:26:16.797 [2024-04-24 20:57:41.352999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.353386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.353414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.353774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.354169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.354197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.354581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.354914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.354946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.355324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.355681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.355710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.356089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.356441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.356470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.356815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.357159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.357189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 00:26:16.797 [2024-04-24 20:57:41.357556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.357902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.797 [2024-04-24 20:57:41.357933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.797 qpair failed and we were unable to recover it. 
00:26:16.798 [2024-04-24 20:57:41.358309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.358663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.358693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.359095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.359454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.359484] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.359859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.360215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.360245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.360618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.360981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.361011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.361352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.361756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.361787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.362129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.362478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.362508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.362857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.363233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.363262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 
00:26:16.798 [2024-04-24 20:57:41.363646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.364047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.364076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.364456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.364813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.364848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.365257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.365603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.365631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.365996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.366226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.366259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.366650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.366969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.366999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.367383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.367752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.367783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.368149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.368506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.368535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 
00:26:16.798 [2024-04-24 20:57:41.368986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.369349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.369379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.369760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.370015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.370046] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.370393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.370745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.370775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.371149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.371404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.371433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.371823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.372201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.372236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.372603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.372979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.373009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.373355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.373744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.373773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 
00:26:16.798 [2024-04-24 20:57:41.374119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.374506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.374534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.374885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.375255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.375283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.375641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.375994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.376023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.376387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.376750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.376783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.377144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.377508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.377538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.377904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.378281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.378310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 00:26:16.798 [2024-04-24 20:57:41.378651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.379026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.379057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.798 qpair failed and we were unable to recover it. 
00:26:16.798 [2024-04-24 20:57:41.379429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.798 [2024-04-24 20:57:41.379781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.379817] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.380173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.380548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.380578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.380831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.381173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.381202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.381569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.381934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.381963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.382337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.382704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.382742] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.383135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.383491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.383522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.383889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.384248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.384278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 
00:26:16.799 [2024-04-24 20:57:41.384650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.385077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.385108] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.385344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.385707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.385760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.386137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.386528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.386557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.386924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.387296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.387335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.387714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.388086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.388116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.388360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.388676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.388704] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.389066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.389435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.389464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 
00:26:16.799 [2024-04-24 20:57:41.389833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.390203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.390231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.390599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.390992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.391022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.391401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.391766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.391810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.392101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.392480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.392509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.392882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.393126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.393158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.393509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.393887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.393918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.394299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.394657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.394685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 
00:26:16.799 [2024-04-24 20:57:41.395090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.395455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.395485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.395836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.396212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.396240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.396618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.396983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.397012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.397382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.397746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.397775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.398135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.398508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.398536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.398903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.399240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.399269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.799 [2024-04-24 20:57:41.399687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.400058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.400089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 
00:26:16.799 [2024-04-24 20:57:41.400484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.400853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.799 [2024-04-24 20:57:41.400882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.799 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.401263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.401626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.401655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.402021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.402390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.402419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.402791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.403175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.403203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.403575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.403902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.403932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.404293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.404655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.404683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.405063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.405412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.405441] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 
00:26:16.800 [2024-04-24 20:57:41.405827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.406185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.406214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.406573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.406922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.406953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.407311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.407660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.407689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.408087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.408441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.408471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.408850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.409217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.409247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.409622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.409991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.410022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.410462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.410830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.410860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 
00:26:16.800 [2024-04-24 20:57:41.411227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.411553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.411582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.411976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.412330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.412362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.413199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.413614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.413644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.414016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.414382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.414409] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.414781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.415041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.415068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.415482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.415859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.415888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.416232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.416569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.416595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 
00:26:16.800 [2024-04-24 20:57:41.417018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.417379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.417405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.417779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.418160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.418187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.418434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.418781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.418810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.419289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.419680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:16.800 [2024-04-24 20:57:41.419706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:16.800 qpair failed and we were unable to recover it. 00:26:16.800 [2024-04-24 20:57:41.420158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.420522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.420552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 00:26:17.069 [2024-04-24 20:57:41.420934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.421295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.421322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 00:26:17.069 [2024-04-24 20:57:41.421683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.422055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.422084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 
00:26:17.069 [2024-04-24 20:57:41.422461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.422692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.422719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 00:26:17.069 [2024-04-24 20:57:41.422991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.423319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.069 [2024-04-24 20:57:41.423347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.069 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.423715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.424086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.424113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.424364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.424750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.424778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.425182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.425435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.425463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.425838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.426196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.426222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.426595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.426934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.426961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 
00:26:17.070 [2024-04-24 20:57:41.427243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.427576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.427602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.427911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.428188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.428214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.428570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.428939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.428966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.429335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.429696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.429723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.430171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.430587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.430613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.430999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.431374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.431400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.431846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.432228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.432254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 
00:26:17.070 [2024-04-24 20:57:41.432628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.432983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.433012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.433389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.433776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.433803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.434071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.434434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.434460] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.434752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.435134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.435160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.435532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.435938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.435965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.436306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.436544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.436577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.437000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.437352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.437379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 
00:26:17.070 [2024-04-24 20:57:41.437582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.437849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.437877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.438210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.438554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.438580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.438933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.439293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.439320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.439691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.440060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.440088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.440455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.440696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.440722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.440979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.441350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.441376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.441771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.442161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.442187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 
00:26:17.070 [2024-04-24 20:57:41.442536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.442688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.442718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.443073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.443444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.443471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.070 qpair failed and we were unable to recover it. 00:26:17.070 [2024-04-24 20:57:41.443844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.070 [2024-04-24 20:57:41.444193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.444220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.444563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.444928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.444955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.445337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.445702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.445759] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.446135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.446484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.446510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.446879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.447247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.447274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 
00:26:17.071 [2024-04-24 20:57:41.447635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.447975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.448002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.448378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.448655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.448681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.449044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.449414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.449440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.449812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.450165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.450191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.450566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.450927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.450956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.451388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.451722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.451758] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.452017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.452331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.452357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 
00:26:17.071 [2024-04-24 20:57:41.452717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.453062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.453089] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.453456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.453812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.453839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.454220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.454563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.454589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.454950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.455312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.455338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.455747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.456143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.456169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.456598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.456967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.456994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.457354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.457705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.457739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 
00:26:17.071 [2024-04-24 20:57:41.458047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.458396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.458422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.458779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.459145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.459172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.459549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.459906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.459933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.460311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.460665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.460691] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.461061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.461427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.461454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.461818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.462189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.462216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.462593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.462966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.462995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 
00:26:17.071 [2024-04-24 20:57:41.463332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.463661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.463687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.464102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.464473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.464501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.071 qpair failed and we were unable to recover it. 00:26:17.071 [2024-04-24 20:57:41.464875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.071 [2024-04-24 20:57:41.465222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.465248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.465603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.465920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.465947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.466293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.466657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.466683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.467142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.467498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.467525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.467895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.468305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.468332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 
00:26:17.072 [2024-04-24 20:57:41.468697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.469058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.469087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.469448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.469788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.469816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.470204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.470553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.470580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.470821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.471206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.471232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.471582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.471917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.471945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.472356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.472710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.472746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.473086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.473447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.473474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 
00:26:17.072 [2024-04-24 20:57:41.473852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.474295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.474322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.474705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.474990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.475017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.475378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.475750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.475778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.476135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.476470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.476496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.476866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.477229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.477256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.477623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.477975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.478002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.478410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.478652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.478677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 
00:26:17.072 [2024-04-24 20:57:41.479080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.479479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.479506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.479705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.480105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.480133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.480395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.480753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.480781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.481155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.481497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.481523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.481893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.482263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.482290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.482658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.482991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.483019] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.483382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.483752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.483780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 
00:26:17.072 [2024-04-24 20:57:41.484146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.484506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.484532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.484903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.485268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.485302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.072 qpair failed and we were unable to recover it. 00:26:17.072 [2024-04-24 20:57:41.485667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.486001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.072 [2024-04-24 20:57:41.486030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.486430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.486798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.486826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.487211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.487572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.487598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.487877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.488278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.488305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.488719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.489104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.489131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 
00:26:17.073 [2024-04-24 20:57:41.489529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.489898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.489927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.490292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.490662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.490689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.491095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.491448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.491475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.491822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.492202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.492229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.492590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.492924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.492958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.493415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.493756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.493784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.494239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.494582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.494608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 
00:26:17.073 [2024-04-24 20:57:41.494963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.495319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.495345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.495716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.496098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.496126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.496510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.496890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.496918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.497291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.497623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.497649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.497932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.498293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.498319] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.498702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.499064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.499090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.499460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.499804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.499831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 
00:26:17.073 [2024-04-24 20:57:41.500218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.500570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.500602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.500971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.501288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.501316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.501684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.502082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.502110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.502371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.502637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.502665] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.503016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.503386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.503414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.503822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.504174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.504201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.504594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.504952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.504980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 
00:26:17.073 [2024-04-24 20:57:41.505338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.505710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.505746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.506058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.506399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.506425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.073 [2024-04-24 20:57:41.506806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.507169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.073 [2024-04-24 20:57:41.507195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.073 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.507552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.507912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.507940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.508210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.508438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.508466] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.508821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.509181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.509207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.509580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.509929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.509957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 
00:26:17.074 [2024-04-24 20:57:41.510336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.510712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.510762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.511167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.511507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.511533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.511906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.512259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.512287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.512653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.513010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.513039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.513407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.513764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.513791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.514137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.514478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.514504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.514875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.515227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.515253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 
00:26:17.074 [2024-04-24 20:57:41.515681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.515949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.515980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.516231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.516552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.516580] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.516960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.517305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.517331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.517705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.518044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.518072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.518462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.518824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.518853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.519146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.519479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.519507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.519884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.520258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.520284] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 
00:26:17.074 [2024-04-24 20:57:41.520650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.521005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.521032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.521402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.521755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.521783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.522165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.522509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.522536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.522930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.523259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.523285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.523665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.524075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.074 [2024-04-24 20:57:41.524102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.074 qpair failed and we were unable to recover it. 00:26:17.074 [2024-04-24 20:57:41.524420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.524780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.524807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.525201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.525486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.525512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 
00:26:17.075 [2024-04-24 20:57:41.525873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.526083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.526109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.526455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.526811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.526840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.527194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.527605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.527632] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.528025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.528364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.528391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.528765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.529119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.529145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.529513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.529856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.529884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.530257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.530617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.530644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 
00:26:17.075 [2024-04-24 20:57:41.530993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.531339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.531366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.531735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.531966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.531996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.532377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.532716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.532760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.533141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.533502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.533529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.533882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.534286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.534312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.534653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.534933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.534962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.535312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.535632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.535658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 
00:26:17.075 [2024-04-24 20:57:41.536014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.536372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.536399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.536778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.537052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.537078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.537449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.537812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.537841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.538187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.538519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.538546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.538863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.539226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.539253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.539618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.539978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.540005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.540392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.540744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.540771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 
00:26:17.075 [2024-04-24 20:57:41.541137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.541484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.541510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.541883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.542245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.542272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.542641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.543042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.543070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.543452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.543802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.543829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.544226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.544587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.544614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.075 [2024-04-24 20:57:41.544965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.545313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.075 [2024-04-24 20:57:41.545339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.075 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.545701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.545950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.545978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 
00:26:17.076 [2024-04-24 20:57:41.546335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.546694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.546721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.547013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.547363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.547390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.547750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.548103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.548129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.548499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.548721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.548762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.549196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.549423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.549449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.549722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.550136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.550163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.550536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.550870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.550898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 
00:26:17.076 [2024-04-24 20:57:41.551278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.551655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.551683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.552072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.552433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.552459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.552835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.553173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.553200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.553555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.553906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.553933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.554314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.554666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.554692] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.555080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.555428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.555454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.555803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.556180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.556207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 
00:26:17.076 [2024-04-24 20:57:41.556485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.556890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.556918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.557286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.557641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.557666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.557902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.558284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.558310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.558673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.559070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.559097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.559350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.559781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.559810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.560186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.560547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.560573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.560942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.561311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.561339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 
00:26:17.076 [2024-04-24 20:57:41.561685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.561918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.561948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.562362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.562704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.562748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.563085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.563454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.563480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.563843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.564200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.564227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.564603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.564830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.564858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.565251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.565595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.565621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.076 qpair failed and we were unable to recover it. 00:26:17.076 [2024-04-24 20:57:41.566010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.076 [2024-04-24 20:57:41.566372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.566398] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 
00:26:17.077 [2024-04-24 20:57:41.566760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.567127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.567154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.567528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.567874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.567901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.568165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.568480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.568507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.568854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.569107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.569133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.569535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.569891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.569918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.570174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.570447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.570473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.570851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.571230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.571256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 
00:26:17.077 [2024-04-24 20:57:41.571616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.571847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.571874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.572263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.572605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.572631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.573003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.573234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.573265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.573609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.573990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.574017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.574256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.574627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.574654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.575031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.575390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.575418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.575673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.576029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.576057] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 
00:26:17.077 [2024-04-24 20:57:41.576431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.576808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.576836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.577196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.577608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.577635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.577988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.578337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.578364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.578763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.579122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.579150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.579477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.579829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.579856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.580213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.580579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.580605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.580928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.581297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.581324] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 
00:26:17.077 [2024-04-24 20:57:41.581700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.582036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.582063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.582311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.582635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.582661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.583049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.583398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.583424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.583784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.584206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.584232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.584629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.584996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.585024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.585393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.585713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.585748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 00:26:17.077 [2024-04-24 20:57:41.586080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.586444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.077 [2024-04-24 20:57:41.586470] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.077 qpair failed and we were unable to recover it. 
00:26:17.077 [2024-04-24 20:57:41.586853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.587228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.587255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.587611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.587857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.587888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.588282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.588682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.588709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.589096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.589450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.589476] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.589847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.590216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.590242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.590614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.590903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.590931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.591309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.591545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.591577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 
00:26:17.078 [2024-04-24 20:57:41.591864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.592225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.592253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.592611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.592980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.593009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.593364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.593736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.593766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.594044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.594351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.594381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.594746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.595116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.595144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.595519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.595887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.595922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.596283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.596631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.596657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 
00:26:17.078 [2024-04-24 20:57:41.597003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.597366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.597393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.597755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.598162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.598188] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.598570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.598909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.598936] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.599213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.599595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.599621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.599916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.600305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.600331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.600689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.601095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.601124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.601508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.601867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.601895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 
00:26:17.078 [2024-04-24 20:57:41.602273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.602626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.602654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.603029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.603353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.603385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.603771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.604140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.604166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.604411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.604770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.604798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.605171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.605554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.605581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.605872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.606257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.606283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.078 qpair failed and we were unable to recover it. 00:26:17.078 [2024-04-24 20:57:41.606640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.607003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.078 [2024-04-24 20:57:41.607031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 
00:26:17.079 [2024-04-24 20:57:41.607397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.607760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.607790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.608162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.608505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.608532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.608864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.609216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.609243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.609625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.609996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.610024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.610390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.610751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.610783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.611143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.611346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.611375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.611615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.611977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.612005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 
00:26:17.079 [2024-04-24 20:57:41.612381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.612747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.612774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.613119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.613482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.613509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.613881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.614246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.614274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.614617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.614988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.615016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.615390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.615752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.615779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.616147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.616500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.616529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.616881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.617224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.617252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 
00:26:17.079 [2024-04-24 20:57:41.617481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.617883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.617917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.618327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.618690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.618717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.619020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.619381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.619408] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.619770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.620107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.620133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.620505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.620862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.620890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.621234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.621650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.621676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.622059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.622287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.622315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 
00:26:17.079 [2024-04-24 20:57:41.622677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.623049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.623077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.623447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.623806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.623834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.624228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.624588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.624616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.624871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.625244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.079 [2024-04-24 20:57:41.625272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.079 qpair failed and we were unable to recover it. 00:26:17.079 [2024-04-24 20:57:41.625638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.626013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.626041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.626414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.626777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.626807] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.627194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.627561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.627589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 
00:26:17.080 [2024-04-24 20:57:41.627962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.628301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.628329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.628700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.629087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.629116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.629366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.629744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.629773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.630152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.630539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.630566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.630853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.631121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.631147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.631519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.631892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.631922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.632360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.632708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.632746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 
00:26:17.080 [2024-04-24 20:57:41.633130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.633350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.633381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.633768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.633989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.634018] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.634422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.634766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.634794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.635164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.635508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.635535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.635930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.636155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.636183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.636450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.636840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.636869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.637227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.637597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.637623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 
00:26:17.080 [2024-04-24 20:57:41.637993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.638345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.638371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.638758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.639126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.639154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.639532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.639888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.639916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.640306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.640635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.640662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.641042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.641393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.641421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.641796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.642184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.642211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.642558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.642907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.642934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 
00:26:17.080 [2024-04-24 20:57:41.643290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.643663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.643689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.644104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.644355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.644382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.644794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.645151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.645177] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.645561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.645889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.645916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.080 qpair failed and we were unable to recover it. 00:26:17.080 [2024-04-24 20:57:41.646297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.080 [2024-04-24 20:57:41.646660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.646687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.647086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.647435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.647461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.647855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.648239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.648266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 
00:26:17.081 [2024-04-24 20:57:41.648641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.649000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.649029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.649384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.649745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.649773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.650013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.650353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.650380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.650785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.651050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.651076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.651457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.651792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.651819] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.652073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.652383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.652410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.652771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.653100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.653126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 
00:26:17.081 [2024-04-24 20:57:41.653471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.653740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.653768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.654048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.654401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.654427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.654684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.655088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.655117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.655459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.655813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.655840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.656257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.656618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.656645] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.657069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.657408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.657435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.657820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.658176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.658203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 
00:26:17.081 [2024-04-24 20:57:41.658568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.658923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.658951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.659242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.659611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.659638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.659993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.660366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.660392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.660771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.661173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.661200] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.661569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.661793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.661822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.662200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.662550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.662576] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.663004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.663236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.663262] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 
00:26:17.081 [2024-04-24 20:57:41.663672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.663872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.663899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.664168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.664494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.664521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.664914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.665189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.665215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.665590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.665937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.665964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.666339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.666701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.081 [2024-04-24 20:57:41.666735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.081 qpair failed and we were unable to recover it. 00:26:17.081 [2024-04-24 20:57:41.667094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.667514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.667540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.667938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.668162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.668190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 
00:26:17.082 [2024-04-24 20:57:41.668565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.668909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.668937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.669311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.669637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.669664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.670071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.670446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.670473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.670853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.671195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.671223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.671586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.671946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.671973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.672358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.672720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.672773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.673173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.673515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.673541] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 
00:26:17.082 [2024-04-24 20:57:41.673900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.674212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.674238] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.674644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.675005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.675032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.675387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.675754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.675781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.676136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.676392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.676418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.676783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.677155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.677183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.677446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.677855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.677883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.678314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.678643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.678670] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 
00:26:17.082 [2024-04-24 20:57:41.679048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.679407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.679434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.679706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.680097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.680124] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.680482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.680895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.680923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.681295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.681657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.681683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.682065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.682425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.682453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.682831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.683188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.683215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.683591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.683951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.683979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 
00:26:17.082 [2024-04-24 20:57:41.684357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.684734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.684763] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.685123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.685491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.685517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.685903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.686271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.686297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.686739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.687126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.687152] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.687502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.687839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.082 [2024-04-24 20:57:41.687866] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.082 qpair failed and we were unable to recover it. 00:26:17.082 [2024-04-24 20:57:41.688255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.688643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.688669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.689005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.689367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.689393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 
00:26:17.083 [2024-04-24 20:57:41.689641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.690042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.690070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.690425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.690800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.690829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.691198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.691540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.691567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.691915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.692275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.692302] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.692665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.692929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.692957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.693221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.693594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.693620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.693970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.694327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.694354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 
00:26:17.083 [2024-04-24 20:57:41.694709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.695081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.695109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.695457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.695814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.695842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.696217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.696568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.696594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.696970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.697327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.697355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.697739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.698104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.698130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.698507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.698874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.698901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.699270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.699603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.699630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 
00:26:17.083 [2024-04-24 20:57:41.699923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.700286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.700314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.700598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.700954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.700982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.701227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.701530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.701557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.083 [2024-04-24 20:57:41.701923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.702277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.083 [2024-04-24 20:57:41.702304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.083 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-24 20:57:41.702645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.703015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.703043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-24 20:57:41.703419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.703770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.703813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-24 20:57:41.704262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.704623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.704651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 
00:26:17.352 [2024-04-24 20:57:41.705025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.705377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.705405] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.352 qpair failed and we were unable to recover it. 00:26:17.352 [2024-04-24 20:57:41.705763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.352 [2024-04-24 20:57:41.705991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.706020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.706279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.706626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.706654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.707005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.707356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.707383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.707749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.708115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.708142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.708510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.708865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.708894] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.709273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.709628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.709656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-24 20:57:41.710033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.710390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.710417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.710793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.711153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.711181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.711603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.711978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.712006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.712394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.712719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.712757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.713049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.713390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.713417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.713791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.714151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.714183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.714538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.714898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.714925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-24 20:57:41.715281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.715645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.715671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.716086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.716411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.716437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.716828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.717187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.717214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.717484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.717872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.717900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.718290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.718542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.718568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.718811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.719157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.719184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.719574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.719934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.719961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 
00:26:17.353 [2024-04-24 20:57:41.720336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.720699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.720738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.720994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.721368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.721400] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.721789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.722138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.722165] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.722517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.722809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.722835] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.723198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.723542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.723568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.353 qpair failed and we were unable to recover it. 00:26:17.353 [2024-04-24 20:57:41.723918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.724282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.353 [2024-04-24 20:57:41.724308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.724764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.725139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.725166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-24 20:57:41.725538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.725900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.725928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.726310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.726671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.726697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.727076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.727414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.727440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.727817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.728193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.728219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.728455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.728763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.728797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.729086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.729408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.729434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.729829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.730208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.730234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-24 20:57:41.730599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.730955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.730982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.731344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.731699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.731734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.732061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.732422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.732449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.732866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.733201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.733228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.733611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.733968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.733997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.734378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.734585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.734614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.734990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.735422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.735449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-24 20:57:41.735737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.735896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.735932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.736309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.736678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.736705] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.737112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.737454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.737481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.737855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.738199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.738225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.738583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.738943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.738970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.739333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.739690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.739717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.739954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.740341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.740369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 
00:26:17.354 [2024-04-24 20:57:41.740758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.741184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.741211] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.741502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.741846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.741874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.742234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.742589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.742616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.742993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.743354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.743382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.354 [2024-04-24 20:57:41.743758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.744101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.354 [2024-04-24 20:57:41.744127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.354 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.744518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.744865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.744893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.745255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.745609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.745636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-24 20:57:41.745996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.746248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.746274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.746562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.746925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.746953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.747322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.747681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.747708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.748075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.748422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.748449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.748717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.749111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.749138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.749503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.749868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.749897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.750268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.750615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.750642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-24 20:57:41.750999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.751350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.751376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.751743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.752080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.752106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.752473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.752819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.752847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.753210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.753618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.753644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.754025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.754406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.754431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.754806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.755032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.755061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.755441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.755719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.755755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-24 20:57:41.756148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.756490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.756516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.756886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.757262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.757288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.757696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.758043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.758073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.758335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.758579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.758608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.758981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.759313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.759340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.759789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.760062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.760088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.760507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.760865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.760892] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 
00:26:17.355 [2024-04-24 20:57:41.761272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.761605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.761631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.761913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.762259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.355 [2024-04-24 20:57:41.762286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.355 qpair failed and we were unable to recover it. 00:26:17.355 [2024-04-24 20:57:41.762643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.763006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.763034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.763274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.763633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.763660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.764049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.764462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.764489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.764862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.765211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.765237] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.765613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.765919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.765946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 
00:26:17.356 [2024-04-24 20:57:41.766319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.766555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.766581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.766840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.767087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.767113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.767530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.767874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.767901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.768290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.768578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.768605] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.768974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.769327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.769355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.769776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.770106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.770133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.770493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.770859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.770887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 
00:26:17.356 [2024-04-24 20:57:41.771303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.771669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.771696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.772060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.772405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.772432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.772811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.773170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.773196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.773563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.773915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.773942] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.774308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.774676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.774703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.775064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.775435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.775463] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.775817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.776186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.776213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 
00:26:17.356 [2024-04-24 20:57:41.776606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.776960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.776987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.777360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.777683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.777710] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.778068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.778407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.778435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.778809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.779177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.779203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.779538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.779906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.779933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.780213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.780594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.780620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 00:26:17.356 [2024-04-24 20:57:41.780856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.781233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.356 [2024-04-24 20:57:41.781259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.356 qpair failed and we were unable to recover it. 
00:26:17.356 [2024-04-24 20:57:41.781570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.356 [2024-04-24 20:57:41.781934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.356 [2024-04-24 20:57:41.781962] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.356 qpair failed and we were unable to recover it.
00:26:17.356 [2024-04-24 20:57:41.782380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.356 [2024-04-24 20:57:41.782707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.356 [2024-04-24 20:57:41.782743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.356 qpair failed and we were unable to recover it.
[... the same four-message failure cycle (two posix.c:1037:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats continuously, with wall-clock timestamps advancing from [2024-04-24 20:57:41.783087] through [2024-04-24 20:57:41.895502] and elapsed time from 00:26:17.356 to 00:26:17.362 ...]
00:26:17.362 [2024-04-24 20:57:41.895901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.896260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.896287] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.896654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.897012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.897040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.897399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.897752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.897780] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.898168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.898512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.898538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.898912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.899204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.899230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.899601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.899965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.899992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.900340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.900569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.900597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 
00:26:17.362 [2024-04-24 20:57:41.900983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.901337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.901365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.901720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.902104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.902131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.902500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.902869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.902897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.903256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.903608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.903634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.362 qpair failed and we were unable to recover it. 00:26:17.362 [2024-04-24 20:57:41.904074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.362 [2024-04-24 20:57:41.904427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.904454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.904834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.905199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.905225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.905632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.906007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.906035] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 
00:26:17.363 [2024-04-24 20:57:41.906408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.906661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.906687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.907062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.907417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.907444] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.907826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.908198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.908224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.908596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.908968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.908997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.909250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.909599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.909626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.910004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.910357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.910383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.910833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.911178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.911204] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 
00:26:17.363 [2024-04-24 20:57:41.911571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.911974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.912001] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.912215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.912580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.912606] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.913005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.913371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.913399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.913751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.914111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.914138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.914512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.914875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.914902] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.915267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.915637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.915663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.916025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.916385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.916412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 
00:26:17.363 [2024-04-24 20:57:41.916797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.917084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.917110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.917466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.917817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.917845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.918237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.918602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.918628] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.918984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.919344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.919372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.919750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.920156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.920182] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.920526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.920899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.920929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.921292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.921644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.921671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 
00:26:17.363 [2024-04-24 20:57:41.922097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.922468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.922495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.922886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.923252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.923279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.923644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.923881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.923908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.924160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.924510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.924537] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.924884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.925242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.925269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.363 qpair failed and we were unable to recover it. 00:26:17.363 [2024-04-24 20:57:41.925692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.363 [2024-04-24 20:57:41.926095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.926123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.926368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.926688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.926715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 
00:26:17.364 [2024-04-24 20:57:41.927054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.927341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.927367] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.927740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.928102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.928129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.928505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.928871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.928900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.929244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.929602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.929629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.930001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.930240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.930269] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.930660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.930929] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.930956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.931340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.931700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.931738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 
00:26:17.364 [2024-04-24 20:57:41.932077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.932445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.932472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.932857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.933233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.933261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.933510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.933864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.933891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.934280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.934637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.934664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.935029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.935389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.935415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.935799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.936153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.936181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.936560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.937037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.937066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 
00:26:17.364 [2024-04-24 20:57:41.937433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.937794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.937822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.938194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.938543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.938571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.938934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.939291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.939317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.939612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.939832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.939863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.940231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.940573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.940600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.940960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.941331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.941358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.941709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.942072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.942100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 
00:26:17.364 [2024-04-24 20:57:41.942438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.942794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.942822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.943101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.943503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.943529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.943904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.944227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.944259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.944614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.944969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.944997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.945386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.945735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.945762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.946155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.946397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.946423] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.364 qpair failed and we were unable to recover it. 00:26:17.364 [2024-04-24 20:57:41.946790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.364 [2024-04-24 20:57:41.947148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.947176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 
00:26:17.365 [2024-04-24 20:57:41.947552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.947903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.947931] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.948341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.948706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.948744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.949117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.949467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.949493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.949903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.950272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.950300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.950669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.951043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.951070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.951423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.951768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.951802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.952193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.952527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.952553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 
00:26:17.365 [2024-04-24 20:57:41.952787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.953165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.953192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.953565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.953917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.953946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.954322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.954655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.954682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.955062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.955390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.955417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.955851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.956217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.956243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.956605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.956943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.956971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.957225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.957589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.957615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 
00:26:17.365 [2024-04-24 20:57:41.957972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.958370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.958397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.958758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.959126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.959159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.959516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.959879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.959907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.960271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.960639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.960666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.960892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.961289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.961316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.961686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.962091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.962119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.962496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.962856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.962883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 
00:26:17.365 [2024-04-24 20:57:41.963253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.963592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.963618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.963851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.964225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.964252] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.964498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.964858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.964887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.965244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.965610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.965637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.966020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.966382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.966415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.966790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.967160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.967187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 00:26:17.365 [2024-04-24 20:57:41.967552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.968014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.968045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.365 qpair failed and we were unable to recover it. 
00:26:17.365 [2024-04-24 20:57:41.968424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.365 [2024-04-24 20:57:41.968776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.968812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.969062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.969311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.969337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.969737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.970115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.970142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.970516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.970868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.970896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.971278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.971662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.971689] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.972076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.972438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.972465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.972844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.973238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.973265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 
00:26:17.366 [2024-04-24 20:57:41.973624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.973969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.973997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.974413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.974769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.974799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.975146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.975501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.975528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.975889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.976232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.976259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.976467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.976786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.976814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.977229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.977609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.977635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.978072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.978439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.978465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 
00:26:17.366 [2024-04-24 20:57:41.978809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.979178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.979206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.979575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.979935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.979963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.980335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.980696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.980723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.981104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.981451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.981478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.981798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.982175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.982202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.982574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.982911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.982940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.983314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.983563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.983589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 
00:26:17.366 [2024-04-24 20:57:41.983999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.984366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.984394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.984772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.985087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.366 [2024-04-24 20:57:41.985115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.366 qpair failed and we were unable to recover it. 00:26:17.366 [2024-04-24 20:57:41.985464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.985806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.985840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.986199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.986535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.986562] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.986940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.987303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.987331] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.987702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.988084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.988112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.988483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.988867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.988895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 
00:26:17.636 [2024-04-24 20:57:41.989262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.989612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.989639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.989996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.990356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.990383] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.990764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.991095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.991123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.991474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.991841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.991869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.992223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.992587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.992613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.636 [2024-04-24 20:57:41.992999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.993400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.636 [2024-04-24 20:57:41.993427] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.636 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:41.993863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.994090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.994118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 
00:26:17.637 [2024-04-24 20:57:41.994499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.994862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.994890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:41.995258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.995616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.995642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:41.996027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.996393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.996420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:41.996786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.997124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.997151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:41.997532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.997797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.997827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:41.998208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.998566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.998592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:41.999000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.999342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:41.999370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 
00:26:17.637 [2024-04-24 20:57:41.999750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.000118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.000144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.000515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.000845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.000873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.001317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.001578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.001604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.001955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.002207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.002234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.002621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.003034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.003062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.003319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.003675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.003702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.004098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.004476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.004503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 
00:26:17.637 [2024-04-24 20:57:42.004847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.005197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.005223] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.005669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.006033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.006060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.006406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.006762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.006790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.007155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.007507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.007533] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.007802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.008202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.008229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.008668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.008907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.008937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.009310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.009744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.009773] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 
00:26:17.637 [2024-04-24 20:57:42.010145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.010386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.010413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.010800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.011183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.011210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.011577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.011923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.011950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.012308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.012702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.012738] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.013120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.013475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.013502] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.013848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.014273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.014299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.637 qpair failed and we were unable to recover it. 00:26:17.637 [2024-04-24 20:57:42.014678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.637 [2024-04-24 20:57:42.014987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.015015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 
00:26:17.638 [2024-04-24 20:57:42.015387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.015757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.015800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.016078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.016480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.016506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.016874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.017293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.017320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.017701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.018099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.018126] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.018492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.018817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.018845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.019216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.019587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.019613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.019961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.020313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.020341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 
00:26:17.638 [2024-04-24 20:57:42.020743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.021109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.021136] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.021509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.021854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.021882] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.022139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.022508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.022535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.022922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.023287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.023313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.023714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.024057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.024084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.024349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.024740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.024768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.025177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.025552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.025579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 
00:26:17.638 [2024-04-24 20:57:42.025845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.026207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.026241] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.026606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.026989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.027017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.027466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.027784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.027812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.028176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.028563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.028589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.028951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.029187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.029216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.029567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.029985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.030012] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.030352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.030694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.030720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 
00:26:17.638 [2024-04-24 20:57:42.031157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.031499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.031526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.031903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.032257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.032283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.032640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.032978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.033005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.033375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.033744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.033771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.034172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.034546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.034575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.034913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.035163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.035190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 00:26:17.638 [2024-04-24 20:57:42.035571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.035922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.035949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.638 qpair failed and we were unable to recover it. 
00:26:17.638 [2024-04-24 20:57:42.036320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.638 [2024-04-24 20:57:42.036668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.036694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.037098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.037484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.037510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.037756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.038114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.038141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.038416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.038780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.038808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.039203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.039462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.039488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.039756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.040129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.040157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.040581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.040877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.040905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 
00:26:17.639 [2024-04-24 20:57:42.041276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.041612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.041639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.041876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.042237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.042264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.042640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.043010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.043040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.043409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.043746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.043775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.044123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.044473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.044500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.044872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.045216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.045242] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.045593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.045953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.045980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 
00:26:17.639 [2024-04-24 20:57:42.046361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.046720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.046760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.047135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.047469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.047496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.047767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.048166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.048192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.048548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.048916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.048945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.049318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.049680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.049707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.050106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.050419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.050445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.050817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.051159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.051185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 
00:26:17.639 [2024-04-24 20:57:42.051568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.051841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.051869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.052248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.052633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.052660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.053048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.053405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.053432] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.053698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.054111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.054139] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.054393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.054685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.054713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.055128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.055486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.055514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.639 [2024-04-24 20:57:42.055887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.056283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.056310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 
00:26:17.639 [2024-04-24 20:57:42.056681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.056922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.639 [2024-04-24 20:57:42.056953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.639 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.057298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.057682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.057709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.058075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.058440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.058467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.058844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.059215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.059240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.059693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.060102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.060129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.060395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.060633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.060659] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.060997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.061375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.061402] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-24 20:57:42.061772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.062160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.062187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.062527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.062892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.062920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.063284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.063651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.063683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.063958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.064323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.064350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.064613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.065032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.065060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.065414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.065785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.065813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.066220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.066575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.066603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-24 20:57:42.066966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.067317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.067343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.067765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.068157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.068185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.068562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.068921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.068950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.069299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.069637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.069663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.070044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.070393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.070419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.070662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.071018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.071052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.071388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.071763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.071792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-24 20:57:42.072165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.072414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.072440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.072823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.073227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.073253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.073637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.073981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.074008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.074319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.074690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.074716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.075082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.075449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.075475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.075847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.076217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.076245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.076627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.076988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.077016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 
00:26:17.640 [2024-04-24 20:57:42.077346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.077720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.077757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.640 qpair failed and we were unable to recover it. 00:26:17.640 [2024-04-24 20:57:42.077980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.640 [2024-04-24 20:57:42.078341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.078374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.078782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.079156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.079184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.079559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.079810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.079838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.080188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.080535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.080561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.081020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.081263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.081289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.081673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.082044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.082072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 
00:26:17.641 [2024-04-24 20:57:42.082438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.082793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.082820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.083202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.083489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.083515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.083881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.084021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.084052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.084304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.084706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.084744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.085147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.085364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.085404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.085788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.086147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.086174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.086545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.086886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.086914] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 
00:26:17.641 [2024-04-24 20:57:42.087299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.087530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.087557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.087946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.088291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.088318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.088698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.089049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.089076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.089437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.089798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.089826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.090088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.090494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.090521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.090914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.091281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.091308] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.091665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.092051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.092079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 
00:26:17.641 [2024-04-24 20:57:42.092366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.092722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.092761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.093157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.093529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.093556] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.093938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.094167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.094197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.641 qpair failed and we were unable to recover it. 00:26:17.641 [2024-04-24 20:57:42.094546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.641 [2024-04-24 20:57:42.094922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.094950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.095317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.095689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.095715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.096097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.096447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.096474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.096848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.097202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.097229] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 
00:26:17.642 [2024-04-24 20:57:42.097683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.098073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.098102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.098537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.098938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.098966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.099349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.099716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.099761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.100176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.100516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.100542] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.100793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.101089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.101117] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.101474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.101828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.101855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.102229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.102581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.102609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 
00:26:17.642 [2024-04-24 20:57:42.102967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.103320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.103347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.103714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.103950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.103979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.104362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.104736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.104765] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.105154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.105490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.105517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.105898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.106229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.106256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.106665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.107027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.107055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.107428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.107783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.107812] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 
00:26:17.642 [2024-04-24 20:57:42.108185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.108547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.108574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.108944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.109188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.109217] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.109584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.109918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.109945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.110309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.110703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.110743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.110934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.111327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.111354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.111707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.112105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.112132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.112499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.112840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.112869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 
00:26:17.642 [2024-04-24 20:57:42.113262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.113594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.113622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.113971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.114325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.114352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.114602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.114952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.114980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.115367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.115734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.642 [2024-04-24 20:57:42.115762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.642 qpair failed and we were unable to recover it. 00:26:17.642 [2024-04-24 20:57:42.116149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.116388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.116414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.116819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.117084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.117110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.117483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.117821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.117848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 
00:26:17.643 [2024-04-24 20:57:42.118201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.118580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.118607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.118978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.119349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.119376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.119767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.120156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.120192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.120567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.120915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.120944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.121312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.121676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.121703] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.122170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.122513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.122539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.122811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.123169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.123196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 
00:26:17.643 [2024-04-24 20:57:42.123556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.123986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.124014] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.124376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.124748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.124777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.125164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.125585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.125612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.126045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.126385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.126412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.126667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.127000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.127028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.127392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.127681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.127708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.128058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.128432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.128459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 
00:26:17.643 [2024-04-24 20:57:42.128803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.129170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.129197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.129578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.129916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.129944] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.130306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.130645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.130673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.131066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.131440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.131467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.131859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.132199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.132225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.132627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.132997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.133024] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.133397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.133769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.133798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 
00:26:17.643 [2024-04-24 20:57:42.134155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.134420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.134448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.134800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.135171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.135198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.135417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.135821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.135848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.136231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.136598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.136624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.136996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.137344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.643 [2024-04-24 20:57:42.137370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.643 qpair failed and we were unable to recover it. 00:26:17.643 [2024-04-24 20:57:42.137740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.138086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.138113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.138480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.138760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.138791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 
00:26:17.644 [2024-04-24 20:57:42.139178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.139530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.139557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.139936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.140309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.140335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.140680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.141063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.141092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.141469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.141818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.141846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.142287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.142647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.142675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.143114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.143477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.143504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.143754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.144090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.144116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 
00:26:17.644 [2024-04-24 20:57:42.144505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.144867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.144895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.145260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.145628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.145655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.145934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.146308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.146335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.146692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.147095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.147122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.147512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.147858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.147885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.148264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.148627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.148654] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.149004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.149403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.149430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 
00:26:17.644 [2024-04-24 20:57:42.149805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.150167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.150195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.150450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.150840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.150868] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.151223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.151572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.151599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.151855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.152217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.152244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.152579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.152969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.152999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.153375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.153700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.153739] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.154101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.154464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.154491] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 
00:26:17.644 [2024-04-24 20:57:42.154852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.155209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.155236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.155624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.155994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.156021] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.156383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.156711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.156782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.157142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.157523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.157550] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.157826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.158193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.158220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.158585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.158949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.644 [2024-04-24 20:57:42.158978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.644 qpair failed and we were unable to recover it. 00:26:17.644 [2024-04-24 20:57:42.159344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.159689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.159715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 
00:26:17.645 [2024-04-24 20:57:42.160147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.160556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.160583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.161002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.161368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.161394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.161660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.162016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.162048] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.162425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.162791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.162818] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.163111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.163483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.163510] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.163772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.164171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.164197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.164557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.164965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.164992] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 
00:26:17.645 [2024-04-24 20:57:42.165330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.165690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.165716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.166095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.166471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.166499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.166860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.167222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.167250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.167625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.167992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.168020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.168405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.168763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.168791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.169192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.169428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.169454] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.169885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.170260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.170286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 
00:26:17.645 [2024-04-24 20:57:42.170672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.171009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.171037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.171395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.171746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.171776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.172144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.172485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.172512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.172879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.173239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.173265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.173524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.173922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.173950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.174320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.174700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.174747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.175141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.175507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.175535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 
00:26:17.645 [2024-04-24 20:57:42.175900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.176265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.176293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.176673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.177023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.645 [2024-04-24 20:57:42.177051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.645 qpair failed and we were unable to recover it. 00:26:17.645 [2024-04-24 20:57:42.177430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.177626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.177656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.178033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.178435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.178462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.178827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.179214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.179240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.179625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.179953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.179982] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.180352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.180750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.180778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 
00:26:17.646 [2024-04-24 20:57:42.181132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.181487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.181514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.181892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.182259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.182286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.182663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.183022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.183055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.183402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.183780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.183808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.184213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.184571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.184598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.184962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.185312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.185339] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.185602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.185980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.186008] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 
00:26:17.646 [2024-04-24 20:57:42.186266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.186609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.186635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.187021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.187294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.187320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.187665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.188097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.188127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.188544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.188835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.188862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.189264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.189608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.189635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.190008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.190344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.190375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.190750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.191084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.191110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 
00:26:17.646 [2024-04-24 20:57:42.191458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.191793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.191821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.192181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.192517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.192543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.192907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.193339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.193366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.193740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.193996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.194023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.194259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.194634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.194661] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.195063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.195429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.195457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.195815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.196162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.196189] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 
00:26:17.646 [2024-04-24 20:57:42.196425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.196757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.196785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.197074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.197352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.197389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.646 qpair failed and we were unable to recover it. 00:26:17.646 [2024-04-24 20:57:42.197699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.646 [2024-04-24 20:57:42.198099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.198127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.198477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.198847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.198874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.199240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.199491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.199517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.199870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.200233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.200261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.200638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.201012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.201040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 
00:26:17.647 [2024-04-24 20:57:42.201393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.201632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.201662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.202041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.202387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.202414] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.202795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.203040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.203070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.203468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.203890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.203918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.204300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.204670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.204696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.205015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.205345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.205371] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.205738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.206095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.206121] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 
00:26:17.647 [2024-04-24 20:57:42.206496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.206847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.206874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.207242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.207594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.207620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.207995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.208287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.208313] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.208671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.209036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.209065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.209411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.209764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.209792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.210157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.210396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.210426] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.210824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.211207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.211235] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 
00:26:17.647 [2024-04-24 20:57:42.211612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.211963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.211991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.212379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.212784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.212813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.213158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.213532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.213558] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.213918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.214291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.214316] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.214713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.214978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.215005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.215238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.215518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.215544] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.215926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.216303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.216330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 
00:26:17.647 [2024-04-24 20:57:42.216645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.216982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.217009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.217400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.217745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.217772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.218231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.218569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.218595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.647 qpair failed and we were unable to recover it. 00:26:17.647 [2024-04-24 20:57:42.218969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.647 [2024-04-24 20:57:42.219343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.219370] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.219823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.220217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.220244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.220592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.220960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.220988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.221336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.221714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.221753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 
00:26:17.648 [2024-04-24 20:57:42.221988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.222388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.222415] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.222858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.223202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.223228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.223604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.223878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.223906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.224267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.224608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.224634] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.225009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.225358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.225385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.225756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.226088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.226114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.226549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.226794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.226822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 
00:26:17.648 [2024-04-24 20:57:42.227190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.227484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.227512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.227890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.228229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.228256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.228630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.228994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.229022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.229369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.229713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.229751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.230112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.230467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.230493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.230865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.231227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.231254] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.231582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.231945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.231973] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 
00:26:17.648 [2024-04-24 20:57:42.232332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.232694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.232722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.232999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.233367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.233395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.233761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.234164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.234192] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.234568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.234919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.234947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.235321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.235667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.235693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.236098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.236475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.236503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.236851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.237204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.237230] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 
00:26:17.648 [2024-04-24 20:57:42.237606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.237972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.238000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.238372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.238627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.238657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.239030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.239396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.239422] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.239798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.240135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.648 [2024-04-24 20:57:42.240161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.648 qpair failed and we were unable to recover it. 00:26:17.648 [2024-04-24 20:57:42.240519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.240870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.240899] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.241272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.241687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.241714] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.242110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.242477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.242503] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 
00:26:17.649 [2024-04-24 20:57:42.242885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.243246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.243272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.243510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.243873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.243901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.244284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.244537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.244564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.244947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.245314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.245340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.245720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.246098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.246125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.246504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.246862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.246889] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.247304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.247658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.247686] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 
00:26:17.649 [2024-04-24 20:57:42.248072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.248423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.248449] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.248825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.249202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.249228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.249604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.249978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.250007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.250371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.250605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.250631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.251006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.251346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.251372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.251750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.251976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.252005] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.252273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.252621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.252648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 
00:26:17.649 [2024-04-24 20:57:42.253029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.253393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.253420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.253790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.254171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.254198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.254567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.254999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.255027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.255408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.255770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.255799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.256205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.256555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.256582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.256962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.257306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.257333] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 00:26:17.649 [2024-04-24 20:57:42.257704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.258068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.649 [2024-04-24 20:57:42.258098] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.649 qpair failed and we were unable to recover it. 
00:26:17.649 [2024-04-24 20:57:42.258446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 [2024-04-24 20:57:42.258892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 [2024-04-24 20:57:42.258919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.649 qpair failed and we were unable to recover it.
00:26:17.649 [2024-04-24 20:57:42.259308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 [2024-04-24 20:57:42.259705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 [2024-04-24 20:57:42.259754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.649 qpair failed and we were unable to recover it.
00:26:17.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2935546 Killed "${NVMF_APP[@]}" "$@"
00:26:17.649 [2024-04-24 20:57:42.260131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 [2024-04-24 20:57:42.260464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 [2024-04-24 20:57:42.260490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.649 qpair failed and we were unable to recover it.
00:26:17.649 20:57:42 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:26:17.649 [2024-04-24 20:57:42.260751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 20:57:42 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:26:17.649 [2024-04-24 20:57:42.261140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.649 [2024-04-24 20:57:42.261167] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.649 qpair failed and we were unable to recover it.
00:26:17.649 20:57:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:26:17.649 [2024-04-24 20:57:42.261537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.650 20:57:42 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:17.650 20:57:42 -- common/autotest_common.sh@10 -- # set +x
00:26:17.650 [2024-04-24 20:57:42.261919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.650 [2024-04-24 20:57:42.261947] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.650 qpair failed and we were unable to recover it.
00:26:17.650 [2024-04-24 20:57:42.262325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.650 [2024-04-24 20:57:42.262680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.650 [2024-04-24 20:57:42.262707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.650 qpair failed and we were unable to recover it.
00:26:17.650 [2024-04-24 20:57:42.263106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.263477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.263506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.650 qpair failed and we were unable to recover it. 00:26:17.650 [2024-04-24 20:57:42.263880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.264236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.264264] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.650 qpair failed and we were unable to recover it. 00:26:17.650 [2024-04-24 20:57:42.264644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.265082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.265110] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.650 qpair failed and we were unable to recover it. 00:26:17.650 [2024-04-24 20:57:42.265469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.265793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.265822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.650 qpair failed and we were unable to recover it. 00:26:17.650 [2024-04-24 20:57:42.266284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.266608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.266636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.650 qpair failed and we were unable to recover it. 00:26:17.650 [2024-04-24 20:57:42.266979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.267345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.267374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.650 qpair failed and we were unable to recover it. 00:26:17.650 [2024-04-24 20:57:42.267783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.268165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.650 [2024-04-24 20:57:42.268191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.650 qpair failed and we were unable to recover it. 
00:26:17.650 [2024-04-24 20:57:42.268446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 [2024-04-24 20:57:42.268806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 [2024-04-24 20:57:42.268836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.921 qpair failed and we were unable to recover it.
00:26:17.921 [2024-04-24 20:57:42.269221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 [2024-04-24 20:57:42.269431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 [2024-04-24 20:57:42.269465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.921 qpair failed and we were unable to recover it.
00:26:17.921 20:57:42 -- nvmf/common.sh@470 -- # nvmfpid=2936405
00:26:17.921 [2024-04-24 20:57:42.269873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 20:57:42 -- nvmf/common.sh@471 -- # waitforlisten 2936405
00:26:17.921 [2024-04-24 20:57:42.270213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 [2024-04-24 20:57:42.270244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.921 qpair failed and we were unable to recover it.
00:26:17.921 20:57:42 -- common/autotest_common.sh@817 -- # '[' -z 2936405 ']'
00:26:17.921 20:57:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:26:17.921 [2024-04-24 20:57:42.270484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 20:57:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:17.921 [2024-04-24 20:57:42.270763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 [2024-04-24 20:57:42.270794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.921 qpair failed and we were unable to recover it.
00:26:17.921 20:57:42 -- common/autotest_common.sh@822 -- # local max_retries=100
00:26:17.921 20:57:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:17.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:17.921 [2024-04-24 20:57:42.271141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 20:57:42 -- common/autotest_common.sh@826 -- # xtrace_disable
00:26:17.921 20:57:42 -- common/autotest_common.sh@10 -- # set +x
00:26:17.921 [2024-04-24 20:57:42.271485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.921 [2024-04-24 20:57:42.271514] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.921 qpair failed and we were unable to recover it.
00:26:17.921 [2024-04-24 20:57:42.271876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.272231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.272261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.921 qpair failed and we were unable to recover it. 00:26:17.921 [2024-04-24 20:57:42.272528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.272899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.272930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.921 qpair failed and we were unable to recover it. 00:26:17.921 [2024-04-24 20:57:42.273300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.273668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.273697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.921 qpair failed and we were unable to recover it. 00:26:17.921 [2024-04-24 20:57:42.273985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.274363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.921 [2024-04-24 20:57:42.274392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.921 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.274814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.275097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.275127] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.275364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.275779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.275810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.276213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.276567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.276598] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 
00:26:17.922 [2024-04-24 20:57:42.276866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.277271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.277300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.277676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.277937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.277969] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.278319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.278675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.278706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.279132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.279497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.279527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.279787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.280183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.280215] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.280567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.280829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.280862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.281265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.281543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.281573] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 
00:26:17.922 [2024-04-24 20:57:42.281921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.282341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.282372] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.282757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.283178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.283207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.283573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.283815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.283848] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.284230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.284637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.284667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.285048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.285452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.285482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.285855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.286269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.286298] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.286570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.286961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.286991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 
00:26:17.922 [2024-04-24 20:57:42.287360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.287751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.287781] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.287982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.288198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.288231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.288503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.288842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.288872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.289237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.289478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.289506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.289892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.291815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.291877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.292283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.292654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.292682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.292976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.293325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.293354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 
00:26:17.922 [2024-04-24 20:57:42.293699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.294165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.294195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.294563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.294926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.294957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.295334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.295755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.295785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.295933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.296177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.922 [2024-04-24 20:57:42.296208] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.922 qpair failed and we were unable to recover it. 00:26:17.922 [2024-04-24 20:57:42.296593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.296973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.297006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.297255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.297655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.297684] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.298041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.298282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.298310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 
00:26:17.923 [2024-04-24 20:57:42.298678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.299038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.299068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.299319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.299646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.299675] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.300101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.300508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.300539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.300898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.301270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.301299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.301525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.301858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.301888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.302243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.302652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.302682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.303046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.303287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.303315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 
00:26:17.923 [2024-04-24 20:57:42.303765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.304028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.304055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.304441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.304807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.304839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.305102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.305476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.305504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.305764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.306126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.306154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.306512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.306896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.306927] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.307289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.307621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.307650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.308000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.308401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.308431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 
00:26:17.923 [2024-04-24 20:57:42.308690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.309081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.309111] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.309483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.309857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.309888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.310146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.310517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.310546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.310822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.311204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.311233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.311478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.311749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.311782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.312155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.312381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.312410] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.312770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.313048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.313079] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 
00:26:17.923 [2024-04-24 20:57:42.313450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.313812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.313843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.314235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.314484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.314511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.314889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.315229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.315259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.315626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.316033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.316063] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.923 [2024-04-24 20:57:42.316451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.316887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.923 [2024-04-24 20:57:42.316916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.923 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.317314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.317684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.317712] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.318085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.318456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.318486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 
00:26:17.924 [2024-04-24 20:57:42.318852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.319209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.319239] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.319603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.319966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.319995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.320366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.320744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.320774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.321146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.321507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.321536] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.321917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.322285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.322322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.322701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.323112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.323141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.323520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.323866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.323897] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 
00:26:17.924 [2024-04-24 20:57:42.324259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.324621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.324651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.924 qpair failed and we were unable to recover it.
00:26:17.924 [2024-04-24 20:57:42.324995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.325294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.325323] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.924 qpair failed and we were unable to recover it.
00:26:17.924 [2024-04-24 20:57:42.325702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.326093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.326123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.924 qpair failed and we were unable to recover it.
00:26:17.924 [2024-04-24 20:57:42.326487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.326885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.326916] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.924 qpair failed and we were unable to recover it.
00:26:17.924 [2024-04-24 20:57:42.327277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.327642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.327672] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.924 qpair failed and we were unable to recover it.
00:26:17.924 [2024-04-24 20:57:42.327695] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization...
00:26:17.924 [2024-04-24 20:57:42.327758] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:17.924 [2024-04-24 20:57:42.328049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.328414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.924 [2024-04-24 20:57:42.328439] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.924 qpair failed and we were unable to recover it.
00:26:17.924 [2024-04-24 20:57:42.328844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.329219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.329248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.329627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.329991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.330022] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.330387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.330754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.330785] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.331186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.331552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.331585] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.331876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.332106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.332137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.332392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.332723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.332779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.333158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.333544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.333574] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 
00:26:17.924 [2024-04-24 20:57:42.333953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.334310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.334340] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.334721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.334987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.335017] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.335354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.335719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.335762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.335995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.336258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.336288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.336553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.336795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.336826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.924 qpair failed and we were unable to recover it. 00:26:17.924 [2024-04-24 20:57:42.337217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.337579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.924 [2024-04-24 20:57:42.337607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.925 qpair failed and we were unable to recover it. 00:26:17.925 [2024-04-24 20:57:42.337998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.925 [2024-04-24 20:57:42.338360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.925 [2024-04-24 20:57:42.338390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.925 qpair failed and we were unable to recover it. 
00:26:17.925 [2024-04-24 20:57:42.338767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.925 [2024-04-24 20:57:42.339167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.925 [2024-04-24 20:57:42.339197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.925 qpair failed and we were unable to recover it.
00:26:17.925 - 00:26:17.930 [2024-04-24 20:57:42.339591 through 20:57:42.449805] the same failure sequence repeats continuously: posix.c:1037:posix_sock_create connect() failures with errno = 111, each pair followed by an nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it."; only the two messages below interrupt the pattern.
00:26:17.926 EAL: No free 2048 kB hugepages reported on node 1
00:26:17.929 [2024-04-24 20:57:42.419054] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:17.930 [2024-04-24 20:57:42.450204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.450564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.450593] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.930 qpair failed and we were unable to recover it. 00:26:17.930 [2024-04-24 20:57:42.450979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.451341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.451369] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.930 qpair failed and we were unable to recover it. 00:26:17.930 [2024-04-24 20:57:42.451752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.452132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.452161] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.930 qpair failed and we were unable to recover it. 00:26:17.930 [2024-04-24 20:57:42.452535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.452891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.452929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.930 qpair failed and we were unable to recover it. 00:26:17.930 [2024-04-24 20:57:42.453323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.453692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.930 [2024-04-24 20:57:42.453721] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.454095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.454467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.454497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.454879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.455244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.455274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 
00:26:17.931 [2024-04-24 20:57:42.455658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.456004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.456037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.456411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.456766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.456796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.457044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.457414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.457443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.457701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.458083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.458114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.458497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.458821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.458853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.459229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.459596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.459625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.460072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.460436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.460471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 
00:26:17.931 [2024-04-24 20:57:42.460707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.461045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.461075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.461437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.461698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.461746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.462106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.462469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.462498] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.462869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.463248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.463277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.463646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.463874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.463904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.464248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.464621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.464650] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.464997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.465243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.465273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 
00:26:17.931 [2024-04-24 20:57:42.465542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.465919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.465949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.466317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.466551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.466579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.466933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.467297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.467332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.467710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.468031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.468064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.468436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.468680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.468718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.469131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.469388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.469418] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.469766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.470150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.470179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 
00:26:17.931 [2024-04-24 20:57:42.470565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.470922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.470955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.471331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.471691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.471722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.471996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.472382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.472412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.472786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.473172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.473201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.473541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.473944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.473976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.931 qpair failed and we were unable to recover it. 00:26:17.931 [2024-04-24 20:57:42.474349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.474640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.931 [2024-04-24 20:57:42.474677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.474990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.475382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.475412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 
00:26:17.932 [2024-04-24 20:57:42.475793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.476166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.476194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.476624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.477006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.477037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.477413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.477811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.477842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.478227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.478583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.478613] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.478972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.479336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.479365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.479624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.479878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.479908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.480291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.480659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.480688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 
00:26:17.932 [2024-04-24 20:57:42.481093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.481464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.481494] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.481885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.482251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.482280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.482697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.483115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.483145] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.483559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.483917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.483948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.484325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.484690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.484720] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.485119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.485521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.485549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.485907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.486162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.486190] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 
00:26:17.932 [2024-04-24 20:57:42.486566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.486965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.486995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.487361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.487750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.487783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.488188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.488548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.488578] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.488959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.489368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.489397] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.489652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.490017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.490047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.490425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.490619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.490648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.490998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.491247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.491279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 
00:26:17.932 [2024-04-24 20:57:42.491648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.492019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.492049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.492416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.492765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.492797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.493174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.493535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.493565] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.493917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.494315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.494344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.494746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.495123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.495151] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.495499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.495864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.495895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.932 qpair failed and we were unable to recover it. 00:26:17.932 [2024-04-24 20:57:42.496278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.932 [2024-04-24 20:57:42.496592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.496623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 
00:26:17.933 [2024-04-24 20:57:42.496993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.497355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.497384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.497758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.498219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.498248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.498460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.498781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.498810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.499083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.499445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.499475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.499821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.500190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.500220] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.500558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.500918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.500948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.501320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.501668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.501697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 
00:26:17.933 [2024-04-24 20:57:42.501938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.502263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.502293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.502595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.502931] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.502960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.503339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.503695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.503735] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.504130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.504531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.504559] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.504939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.505300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.505329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.505717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.506109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.506140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.506371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.506767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.506798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 
00:26:17.933 [2024-04-24 20:57:42.507162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.507509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.507538] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.507901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.508269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.508299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.508665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.509047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.509078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.509483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.509817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.509846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.510213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.510568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.510597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.510806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.511037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.511064] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 00:26:17.933 [2024-04-24 20:57:42.511413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.511796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.933 [2024-04-24 20:57:42.511827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.933 qpair failed and we were unable to recover it. 
00:26:17.933 [2024-04-24 20:57:42.512237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.512598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.512626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.933 qpair failed and we were unable to recover it.
00:26:17.933 [2024-04-24 20:57:42.512984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.513390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.513419] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.933 qpair failed and we were unable to recover it.
00:26:17.933 [2024-04-24 20:57:42.513795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.514041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.514073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.933 qpair failed and we were unable to recover it.
00:26:17.933 [2024-04-24 20:57:42.514276] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:17.933 [2024-04-24 20:57:42.514327] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:17.933 [2024-04-24 20:57:42.514336] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:17.933 [2024-04-24 20:57:42.514342] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:17.933 [2024-04-24 20:57:42.514348] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:17.933 [2024-04-24 20:57:42.514426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.514514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:26:17.933 [2024-04-24 20:57:42.514673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:26:17.933 [2024-04-24 20:57:42.514814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.514836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:26:17.933 [2024-04-24 20:57:42.514847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.933 qpair failed and we were unable to recover it.
00:26:17.933 [2024-04-24 20:57:42.514836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:26:17.933 [2024-04-24 20:57:42.515241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.515609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:17.933 [2024-04-24 20:57:42.515637] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420
00:26:17.934 qpair failed and we were unable to recover it.
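
The app_setup_trace NOTICE entries above spell out how the trace left behind by this nvmf target can be collected. A minimal sketch of the two options, using only the command, shm id, and path quoted in those NOTICE lines (the /tmp destination is an arbitrary choice for illustration, not taken from the log):

    # Snapshot trace events from the running nvmf app (command quoted from the NOTICE line above).
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file named in the log for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0

For context, errno = 111 in the surrounding connect() failures is ECONNREFUSED on Linux, which typically means no listener was yet accepting TCP connections on 10.0.0.2 port 4420 when these qpair attempts were made.
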
00:26:17.934 [2024-04-24 20:57:42.515906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.516270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.516300] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.516662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.517048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.517077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.517318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.517567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.517601] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.517977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.518234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.518263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.518629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.518996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.519026] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.519397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.519760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.519790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.520178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.520551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.520579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 
00:26:17.934 [2024-04-24 20:57:42.520932] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.521313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.521343] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.521574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.521927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.521957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.522315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.522690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.522718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.522983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.523307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.523336] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.523766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.524171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.524202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.524588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.525146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.525184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.525552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.525922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.525954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 
00:26:17.934 [2024-04-24 20:57:42.526384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.526760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.526791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.527076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.527483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.527512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.527790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.528167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.528196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.528425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.528690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.528719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.529153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.529407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.529435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.529824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.530063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.530094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.530326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.530690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.530718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 
00:26:17.934 [2024-04-24 20:57:42.530986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.531359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.531389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.531766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.532170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.532199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.532431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.532796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.532826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.533197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.533344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.533374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.533625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.533911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.533940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.534170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.534528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.534557] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 00:26:17.934 [2024-04-24 20:57:42.534938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.535177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.535209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.934 qpair failed and we were unable to recover it. 
00:26:17.934 [2024-04-24 20:57:42.535347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.934 [2024-04-24 20:57:42.535751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.535782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.536254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.536599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.536629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.537004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.537215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.537244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.537593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.537832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.537862] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.538204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.538566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.538596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.538975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.539297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.539325] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.539570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.539925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.539955] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 
00:26:17.935 [2024-04-24 20:57:42.540335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.540701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.540744] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.541119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.541483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.541513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.541895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.542141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.542171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.542393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.542719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.542760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.543103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.543323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.543352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.543612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.543869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.543904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.544156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.544381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.544412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 
00:26:17.935 [2024-04-24 20:57:42.544823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.545197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.545227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.545364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.545592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.545619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.546000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.546352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.546382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.546622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.546891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.546923] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.547154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.547514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.547543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.547917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.548140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.548169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 00:26:17.935 [2024-04-24 20:57:42.548397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.548780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.935 [2024-04-24 20:57:42.548811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.935 qpair failed and we were unable to recover it. 
00:26:17.935 [2024-04-24 20:57:42.549224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.549573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.549600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.936 qpair failed and we were unable to recover it. 00:26:17.936 [2024-04-24 20:57:42.549968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.550218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.550247] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.936 qpair failed and we were unable to recover it. 00:26:17.936 [2024-04-24 20:57:42.550509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.550876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.550907] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.936 qpair failed and we were unable to recover it. 00:26:17.936 [2024-04-24 20:57:42.551285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.551652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.551683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.936 qpair failed and we were unable to recover it. 00:26:17.936 [2024-04-24 20:57:42.552093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.552470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.552500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.936 qpair failed and we were unable to recover it. 00:26:17.936 [2024-04-24 20:57:42.552738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.553077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:17.936 [2024-04-24 20:57:42.553106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:17.936 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.553487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.553854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.553887] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 
00:26:18.211 [2024-04-24 20:57:42.554255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.554496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.554525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.554900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.555275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.555305] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.555562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.555927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.555958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.556365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.556745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.556776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.557146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.557524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.557554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.557976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.558191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.558219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.558464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.558843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.558874] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 
00:26:18.211 [2024-04-24 20:57:42.559262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.559636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.559666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.560037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.560274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.560307] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.560539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.560973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.561004] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.561390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.561809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.561839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.562233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.562595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.562623] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.562995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.563360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.563389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.563768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.564144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.564172] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 
00:26:18.211 [2024-04-24 20:57:42.564528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.564760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.564787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.565173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.565568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.565597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.565846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.566109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.566141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.566510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.566918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.566949] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.567314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.567688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.567717] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.568085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.568449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.568478] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.568706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.568872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.568900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 
00:26:18.211 [2024-04-24 20:57:42.569231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.569635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.569663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.569883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.570251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.570281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.211 [2024-04-24 20:57:42.570664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.571040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.211 [2024-04-24 20:57:42.571069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.211 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.571437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.571647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.571673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.572076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.572436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.572465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.572682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.573048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.573080] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.573292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.573535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.573564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 
00:26:18.212 [2024-04-24 20:57:42.573948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.574175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.574203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.574542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.574905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.574935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.575163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.575541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.575571] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.575835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.576199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.576228] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.576594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.576826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.576855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.577142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.577551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.577581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.578051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.578420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.578451] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 
00:26:18.212 [2024-04-24 20:57:42.578828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.579168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.579199] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.579479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.579835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.579865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.580259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.580618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.580648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.581088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.581338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.581366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.581751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.581960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.581988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.582363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.582724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.582766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.583132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.583541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.583569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 
00:26:18.212 [2024-04-24 20:57:42.583952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.584320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.584349] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.584720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.585078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.585107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.212 qpair failed and we were unable to recover it. 00:26:18.212 [2024-04-24 20:57:42.585485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.212 [2024-04-24 20:57:42.585848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.585878] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.586238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.586565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.586594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.586967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.587211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.587240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.587622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.587977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.588009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.588223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.588467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.588495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 
00:26:18.213 [2024-04-24 20:57:42.588850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.589050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.589078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.589221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.589458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.589486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.589589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.589776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.589805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.590176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.590405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.590433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.590792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.591151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.591179] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.591555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.591796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.591827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.592194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.592423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.592453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 
00:26:18.213 [2024-04-24 20:57:42.592705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.593076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.593107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.593472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.593838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.593869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.594241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.594599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.594627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.595009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.595252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.595283] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.595545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.595896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.595925] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.596286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.596645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.596674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.597051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.597432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.597461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 
00:26:18.213 [2024-04-24 20:57:42.597592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.597930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.597961] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.598329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.598693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.598722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.599139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.599340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.599368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.599748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.600094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.600123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.600386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.600591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.600621] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.601006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.601370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.601399] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.601769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.602135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.602163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 
00:26:18.213 [2024-04-24 20:57:42.602534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.602866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.602896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.213 qpair failed and we were unable to recover it. 00:26:18.213 [2024-04-24 20:57:42.603234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.213 [2024-04-24 20:57:42.603585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.603614] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.603983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.604337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.604365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.604748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.605089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.605118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.605469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.605679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.605707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.606106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.606338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.606368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.606758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.607130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.607159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 
00:26:18.214 [2024-04-24 20:57:42.607530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.607899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.607934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.608268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.608656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.608685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.609068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.609445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.609474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.609862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.610082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.610112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.610468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.610813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.610842] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.611228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.611557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.611588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.611945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.612323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.612352] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 
00:26:18.214 [2024-04-24 20:57:42.612713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.613088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.613119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.613480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.613811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.613841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.614206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.614592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.614620] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.614987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.615247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.615281] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.615633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.615872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.615901] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.616275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.616638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.616667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.617045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.617405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.617434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 
00:26:18.214 [2024-04-24 20:57:42.617815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.618189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.618218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.618454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.618823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.618853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.619089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.619297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.619326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.619635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.619979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.620009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.620428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.620767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.620797] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.621216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.621589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.621618] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.621849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.622184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.622218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 
00:26:18.214 [2024-04-24 20:57:42.622549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.622781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.622813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.623033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.623280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.214 [2024-04-24 20:57:42.623309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.214 qpair failed and we were unable to recover it. 00:26:18.214 [2024-04-24 20:57:42.623672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.624042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.624073] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.624439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.624799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.624829] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.625207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.625571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.625599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.625999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.626361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.626389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.626599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.627012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.627043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 
00:26:18.215 [2024-04-24 20:57:42.627420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.627794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.627824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.628206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.628569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.628597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.628996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.629233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.629266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.629667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.630059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.630088] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.630462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.630824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.630855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.631260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.631618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.631647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.632030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.632391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.632420] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 
00:26:18.215 [2024-04-24 20:57:42.632794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.633169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.633198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.633418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.633808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.633839] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.634224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.634578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.634607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.634705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.634967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.634996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.635236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.635599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.635629] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.635985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.636215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.636246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.636496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.636854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.636884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 
00:26:18.215 [2024-04-24 20:57:42.637286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.637658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.637687] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.638039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.638432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.638462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.638815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.639188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.639219] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.639472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.639924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.639954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.640350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.640713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.640754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.641165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.641531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.641560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 00:26:18.215 [2024-04-24 20:57:42.641907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.642300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.642330] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.215 qpair failed and we were unable to recover it. 
00:26:18.215 [2024-04-24 20:57:42.642560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.215 [2024-04-24 20:57:42.642782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.642813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.643213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.643555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.643584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.643966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.644330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.644359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.644606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.644964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.644995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.645372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.645745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.645775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.646027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.646408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.646437] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.646790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.647202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.647231] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 
00:26:18.216 [2024-04-24 20:57:42.647596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.647827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.647858] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.648254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.648491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.648527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.648926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.649292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.649321] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.649690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.650039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.650071] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.650514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.650762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.650790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.651178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.651421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.651450] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.651831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.652176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.652205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 
00:26:18.216 [2024-04-24 20:57:42.652593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.652957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.652988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.653373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.653746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.653776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.653974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.654209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.654240] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.654602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.654959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.654988] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.655230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.655471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.655500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.655879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.656258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.656288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.656669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.657045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.657074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 
00:26:18.216 [2024-04-24 20:57:42.657453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.657824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.657855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.658272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.658647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.658676] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.659031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.659387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.659416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.659799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.660175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.660203] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.216 qpair failed and we were unable to recover it. 00:26:18.216 [2024-04-24 20:57:42.660596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.216 [2024-04-24 20:57:42.660785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.660813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.661184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.661552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.661581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.661963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.662336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.662365] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 
00:26:18.217 [2024-04-24 20:57:42.662614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.662982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.663011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.663380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.663746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.663777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.664145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.664506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.664535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.664905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.665282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.665310] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.665664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.666033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.666062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.666295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.666652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.666681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.666937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.667318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.667347] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 
00:26:18.217 [2024-04-24 20:57:42.667717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.667983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.668011] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.668425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.668665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.668693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.669063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.669414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.669443] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.669813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.670187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.670216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.670586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.670805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.670834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.671182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.671554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.671583] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.671964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.672210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.672243] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 
00:26:18.217 [2024-04-24 20:57:42.672613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.673020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.673050] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.673297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.673653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.673682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.674046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.674432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.674462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.674687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.675062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.675092] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.675461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.675830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.675860] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.676241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.676602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.676631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.676849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.677131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.677159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 
00:26:18.217 [2024-04-24 20:57:42.677528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.677746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.677774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.678130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.678352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.678380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.217 qpair failed and we were unable to recover it. 00:26:18.217 [2024-04-24 20:57:42.678613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.217 [2024-04-24 20:57:42.678934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.678965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.679209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.679562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.679591] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.679959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.680352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.680382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.680645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.681003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.681033] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.681246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.681489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.681518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 
00:26:18.218 [2024-04-24 20:57:42.681919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.682128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.682159] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.682375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.682598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.682627] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.682999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.683367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.683395] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.683763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.683977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.684007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.684377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.684745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.684775] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.685122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.685429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.685456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.685834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.686206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.686234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 
00:26:18.218 [2024-04-24 20:57:42.686551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.686918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.686948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.687305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.687670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.687700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.688093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.688456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.688486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.688707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.688933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.688963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.689358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.689558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.689587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.689805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.690063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.690094] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.690317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.690642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.690673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 
00:26:18.218 [2024-04-24 20:57:42.691080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.691446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.691475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.691848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.692220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.692249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.692629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.692997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.693028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.693399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.693779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.693811] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.694190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.694553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.694581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.694860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.695072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.695100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.695361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.695459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.695488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 
00:26:18.218 [2024-04-24 20:57:42.695758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.696013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.696042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.696419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.696783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.696813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.218 [2024-04-24 20:57:42.697051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.697433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.218 [2024-04-24 20:57:42.697462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.218 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.697850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.698127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.698154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.698483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.698739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.698770] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.699132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.699517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.699546] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.699892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.700112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.700143] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 
00:26:18.219 [2024-04-24 20:57:42.700522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.700822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.700853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.701118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.701467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.701497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.701877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.702205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.702234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.702601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.702968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.702999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.703379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.703792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.703822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.704213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.704559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.704590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.704980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.705082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.705112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 
00:26:18.219 [2024-04-24 20:57:42.705462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.705781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.705813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.706201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.706587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.706617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.707007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.707428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.707457] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.707693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.708106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.708137] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.708501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.708749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.708777] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.709160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.709532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.709561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.709942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.710036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.710061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 
00:26:18.219 [2024-04-24 20:57:42.710420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.710665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.710695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.710964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.711351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.711380] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.711761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.711852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.711880] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.712229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.712598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.712626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.713000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.713218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.713246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.713502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.713869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.713900] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.714152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.714489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.714519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 
00:26:18.219 [2024-04-24 20:57:42.714908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.715113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.715140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.715523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.715775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.715803] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.716205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.716572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.716602] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.219 qpair failed and we were unable to recover it. 00:26:18.219 [2024-04-24 20:57:42.716988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.717360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.219 [2024-04-24 20:57:42.717389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.717584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.717923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.717953] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.718168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.718553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.718582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.718967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.719328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.719358] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 
00:26:18.220 [2024-04-24 20:57:42.719750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.720019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.720056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.720435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.720798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.720828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.721206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.721575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.721603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.721994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.722349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.722378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.722759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.723016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.723045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.723280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.723494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.723523] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.723887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.724104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.724132] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 
00:26:18.220 [2024-04-24 20:57:42.724382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.724752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.724783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.725038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.725411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.725440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.725659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.725997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.726027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.726254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.726650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.726685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.727063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.727282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.727309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.727542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.727915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.727946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.728315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.728679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.728708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 
00:26:18.220 [2024-04-24 20:57:42.729089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.729352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.729379] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.220 qpair failed and we were unable to recover it. 00:26:18.220 [2024-04-24 20:57:42.729583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.729915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.220 [2024-04-24 20:57:42.729946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.730276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.730621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.730651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.730933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.731151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.731183] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.731558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.731915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.731945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.732315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.732679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.732708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.733118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.733493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.733529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 
00:26:18.221 [2024-04-24 20:57:42.733874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.734257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.734286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.734657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.735003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.735034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.735429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.735645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.735673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.736079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.736453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.736481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.736874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.737242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.737271] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.737642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.737855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.737886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.738258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.738622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.738651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 
00:26:18.221 [2024-04-24 20:57:42.738999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.739365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.739393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.739775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.740162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.740191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.740566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.740917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.740957] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.741184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.741560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.741589] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.741872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.742281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.742311] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.742570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.742935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.742965] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.743347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.743560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.743587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 
00:26:18.221 [2024-04-24 20:57:42.744001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.744231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.744258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.744661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.745036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.745065] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.745443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.745671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.745698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.221 qpair failed and we were unable to recover it. 00:26:18.221 [2024-04-24 20:57:42.746067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.221 [2024-04-24 20:57:42.746433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.746462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.746833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.747133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.747163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.747546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.747914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.747946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.748304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.748689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.748719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 
00:26:18.222 [2024-04-24 20:57:42.749170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.749383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.749412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.749672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.750044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.750074] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.750310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.750658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.750688] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.751089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.751458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.751488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.751866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.752253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.752282] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.752652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.753015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.753045] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.753425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.753798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.753828] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 
00:26:18.222 [2024-04-24 20:57:42.754224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.754605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.754633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.755008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.755222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.755250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.755602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.755966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.755997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.756339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.756704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.756747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.757122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.757350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.757377] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.757575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.757948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.757978] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.758207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.758597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.758626] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 
00:26:18.222 [2024-04-24 20:57:42.758981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.759361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.759390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.759651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.760031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.760062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.760284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.760593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.760622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.761008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.761349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.761378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.761758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.762171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.762201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.762455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.762771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.762804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.763175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.763537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.763566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 
00:26:18.222 [2024-04-24 20:57:42.763952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.764349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.764378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.764763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.765010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.765041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.222 qpair failed and we were unable to recover it. 00:26:18.222 [2024-04-24 20:57:42.765391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.222 [2024-04-24 20:57:42.765612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.765640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.766065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.766397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.766425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.766662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.766902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.766932] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.767321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.767524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.767552] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.767898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.768131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.768160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 
00:26:18.223 [2024-04-24 20:57:42.768423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.768667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.768698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.769112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.769483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.769512] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.769865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.770075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.770105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.770475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.770837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.770867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.771252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.771621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.771651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.772023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.772386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.772417] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.772799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.773173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.773202] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 
00:26:18.223 [2024-04-24 20:57:42.773440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.773815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.773845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.774218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.774588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.774617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.774883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.775245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.775274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.775492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.775909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.775940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.776210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.776439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.776468] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.776819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.777204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.777232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.777592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.777937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.777967] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 
00:26:18.223 [2024-04-24 20:57:42.778370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.778770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.778802] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.779045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.779293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.779322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.779570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.779804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.779838] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.780222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.780581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.780610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.780967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.781176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.781205] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.781558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.781907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.781937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.782311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.782528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.782555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 
00:26:18.223 [2024-04-24 20:57:42.782720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.783149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.783178] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.223 qpair failed and we were unable to recover it. 00:26:18.223 [2024-04-24 20:57:42.783526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.223 [2024-04-24 20:57:42.783925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.783956] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.784351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.784713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.784754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.785012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.785419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.785448] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.785767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.785969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.785997] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.786350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.786759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.786790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.787144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.787387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.787416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-24 20:57:42.787792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.787886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.787912] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.788260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.788499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.788527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.788909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.789178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.789206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.789576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.789993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.790023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.790368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.790580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.790609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.790962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.791346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.791376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.791771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.792049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.792076] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-24 20:57:42.792472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.792657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.792685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.793053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.793405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.793433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.793667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.793971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.794002] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.794418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.794783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.794814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.795093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.795316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.795345] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.795712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.796126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.796156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.796571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.796917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.796948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-24 20:57:42.797342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.797582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.797611] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.797969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.798334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.798362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.798613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.798821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.798851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.799232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.799600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.799630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.799861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.800191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.800221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.800469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.800845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.800876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.801247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.801466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.801493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 
00:26:18.224 [2024-04-24 20:57:42.801804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.802138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.802168] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.802427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.802816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.224 [2024-04-24 20:57:42.802845] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.224 qpair failed and we were unable to recover it. 00:26:18.224 [2024-04-24 20:57:42.803298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.803663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.803693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.803981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.804351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.804381] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.804759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.805006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.805034] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.805399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.805622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.805652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.805878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.806241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.806270] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 
00:26:18.225 [2024-04-24 20:57:42.806644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.807011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.807041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.807420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.807648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.807677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.807992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.808367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.808396] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.808614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.808969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.809000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.809395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.809609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.809636] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.809930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.810299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.810328] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.810621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.810995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.811025] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 
00:26:18.225 [2024-04-24 20:57:42.811411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.811613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.811642] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.811908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.812305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.812337] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.812705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.813082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.813114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.813516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.813779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.813809] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.814058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.814307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.814338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.814759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.814961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.814991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.815380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.815642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.815673] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 
00:26:18.225 [2024-04-24 20:57:42.815986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.816238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.816266] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.816661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.817039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.817072] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.817294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.817653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.817685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.817940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.818312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.225 [2024-04-24 20:57:42.818344] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.225 qpair failed and we were unable to recover it. 00:26:18.225 [2024-04-24 20:57:42.818562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.818933] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.818964] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.819322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.819686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.819716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.820072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.820437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.820467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 
00:26:18.226 [2024-04-24 20:57:42.820839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.821176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.821207] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.821554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.821785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.821814] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.822209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.822574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.822603] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.822983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.823357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.823385] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.823757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.824134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.824164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.824538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.824768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.824796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.825196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.825403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.825433] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 
00:26:18.226 [2024-04-24 20:57:42.825807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.826181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.826210] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.826627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.827024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.827055] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.827440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.827537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.827561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.827954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.828333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.828362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.828718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.829101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.829130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.829381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.829762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.829792] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.830163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.830534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.830563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 
00:26:18.226 [2024-04-24 20:57:42.830941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.831305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.831341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.831559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.831928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.831958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.832326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.832696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.832734] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.833097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.833464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.833492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.833697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.834074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.834104] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.834471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.834840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.226 [2024-04-24 20:57:42.834871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.226 qpair failed and we were unable to recover it. 00:26:18.226 [2024-04-24 20:57:42.835128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.835496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.835525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.227 qpair failed and we were unable to recover it. 
00:26:18.227 [2024-04-24 20:57:42.835756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.836111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.836141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.227 qpair failed and we were unable to recover it. 00:26:18.227 [2024-04-24 20:57:42.836518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.836753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.836784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.227 qpair failed and we were unable to recover it. 00:26:18.227 [2024-04-24 20:57:42.837171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.837445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.837474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.227 qpair failed and we were unable to recover it. 00:26:18.227 [2024-04-24 20:57:42.837841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.838214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.838249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.227 qpair failed and we were unable to recover it. 00:26:18.227 [2024-04-24 20:57:42.838627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.839029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.227 [2024-04-24 20:57:42.839060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.227 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.839425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.839667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.839697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.839950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.840334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.840364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 
00:26:18.519 [2024-04-24 20:57:42.840775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.840927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.840954] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.841182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.841531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.841560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.841964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.842324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.842353] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.842718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.843102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.843134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.843285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.843620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.843649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.844023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.844395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.844424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 00:26:18.519 [2024-04-24 20:57:42.844809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.845174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.519 [2024-04-24 20:57:42.845221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.519 qpair failed and we were unable to recover it. 
00:26:18.520 [2024-04-24 20:57:42.845572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.845920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.845950] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.846334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.846705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.846745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.847086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.847457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.847486] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.847875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.848251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.848280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.848648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.849018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.849047] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.849416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.849748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.849779] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.850180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.850535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.850564] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 
00:26:18.520 [2024-04-24 20:57:42.850792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.851192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.851221] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.851575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.851960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.851991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.852359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.852763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.852800] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.853138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.853446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.853474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.853704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.854084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.854113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.854480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.854841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.854871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.855279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.855687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.855716] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 
00:26:18.520 [2024-04-24 20:57:42.855956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.856333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.856362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.856748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.857118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.857148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.857527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.857937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.857966] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.858339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.858706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.858747] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.859007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.859360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.859387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.859766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.860014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.860042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.860451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.860804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.860834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 
00:26:18.520 [2024-04-24 20:57:42.861079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.861486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.861515] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.861901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.862121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.862148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.862517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.862759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.862788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.863153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.863366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.863393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.863762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.864166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.864196] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.864571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.864817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.864847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 00:26:18.520 [2024-04-24 20:57:42.865235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.865489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.865517] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.520 qpair failed and we were unable to recover it. 
00:26:18.520 [2024-04-24 20:57:42.865891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.520 [2024-04-24 20:57:42.866102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.866129] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.866495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.866860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.866890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.867276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.867580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.867610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.867871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.868283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.868312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.868556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.868964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.868994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.869317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.869552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.869582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.869959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.870313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.870342] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 
00:26:18.521 [2024-04-24 20:57:42.870694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.870946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.870977] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.871356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.871738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.871768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.872130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.872492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.872521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.872955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.873202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.873234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.873626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.873996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.874029] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.874401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.874801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.874831] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.875049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.875361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.875389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 
00:26:18.521 [2024-04-24 20:57:42.875648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.876012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.876041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.876415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.876773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.876804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.877182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.877553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.877582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.877922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.878288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.878317] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.878699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.879104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.879134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.879353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.879719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.879760] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.880006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.880393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.880421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 
00:26:18.521 [2024-04-24 20:57:42.880649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.881054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.881085] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.881466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.881840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.881871] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.882119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.882475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.882508] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.882878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.883104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.883134] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.883512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.883879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.883908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.884059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.884432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.884461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.884841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.885250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.885279] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 
00:26:18.521 [2024-04-24 20:57:42.885631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.886013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.521 [2024-04-24 20:57:42.886043] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.521 qpair failed and we were unable to recover it. 00:26:18.521 [2024-04-24 20:57:42.886410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.886774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.886804] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.887033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.887401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.887429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.887786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.888160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.888187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.888408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.888797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.888826] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.889177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.889431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.889459] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.889724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.890085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.890113] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 
00:26:18.522 [2024-04-24 20:57:42.890491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.890623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.890652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.891078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.891441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.891469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.891835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.892079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.892109] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.892359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.892738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.892768] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.893191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.893394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.893421] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.893793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.894168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.894197] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.894579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.894790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.894820] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 
00:26:18.522 [2024-04-24 20:57:42.895209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.895442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.895474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.895813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.896058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.896086] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.896343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.896710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.896751] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.897129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.897491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.897520] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.897781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.898016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.898044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.898264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.898633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.898662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.899026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.899382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.899412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 
00:26:18.522 [2024-04-24 20:57:42.899764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.900125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.900155] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.900535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.900905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.900934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.901166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.901396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.901425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.901814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.902198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.902227] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.902591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.902964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.902995] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.903381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.903589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.903617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.903988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.904358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.904387] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 
00:26:18.522 [2024-04-24 20:57:42.904483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.904805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.904836] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.522 qpair failed and we were unable to recover it. 00:26:18.522 [2024-04-24 20:57:42.905259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.905620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.522 [2024-04-24 20:57:42.905649] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.905876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.906088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.906118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.906500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.906907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.906937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.907160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.907552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.907581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.907968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.908185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.908212] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.908608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.908967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.908996] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 
00:26:18.523 [2024-04-24 20:57:42.909374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.909780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.909810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.910196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.910575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.910604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.910830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.911063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.911093] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.911528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.911900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.911930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.912309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.912673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.912702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.912958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.913345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.913375] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.913758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.914128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.914157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 
00:26:18.523 [2024-04-24 20:57:42.914539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.914875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.914905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.915300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.915665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.915694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.915805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.916215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.916245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.916611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.916953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.916985] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.917366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.917739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.917769] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.918112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.918475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.918505] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.918875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.919231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.919259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 
00:26:18.523 [2024-04-24 20:57:42.919479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.919846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.919876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.920250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.920611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.920639] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.921028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.921214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.921244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.921624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.921940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.921970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.922335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.922580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.922608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.922962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.923332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.923362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.923577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.923873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.923904] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 
00:26:18.523 [2024-04-24 20:57:42.924274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.924492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.924519] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.924790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.925189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.523 [2024-04-24 20:57:42.925218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.523 qpair failed and we were unable to recover it. 00:26:18.523 [2024-04-24 20:57:42.925595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.925944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.925975] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.926351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.926713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.926755] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.927120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.927470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.927499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.927952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.928345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.928374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.928604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.928960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.928989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 
00:26:18.524 [2024-04-24 20:57:42.929233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.929563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.929592] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.929999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.930406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.930435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.930806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.931185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.931214] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.931610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.931980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.932010] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.932367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.932740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.932771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.933112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.933354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.933384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.933756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.934129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.934157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 
00:26:18.524 [2024-04-24 20:57:42.934567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.934899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.934929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.935291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.935655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.935683] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.936079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.936451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.936480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.936859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.937230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.937258] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.937605] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.937980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.938016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.938383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.938743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.938772] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.939137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.939501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.939529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 
00:26:18.524 [2024-04-24 20:57:42.939884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.940282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.940312] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.940549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.940910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.940940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.941356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.941722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.941762] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.524 qpair failed and we were unable to recover it. 00:26:18.524 [2024-04-24 20:57:42.942133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.942340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.524 [2024-04-24 20:57:42.942368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.942738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.943127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.943156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.943491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.943857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.943888] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.944268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.944606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.944635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 
00:26:18.525 [2024-04-24 20:57:42.945005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.945342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.945376] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.945754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.946143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.946174] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.946529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.946744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.946771] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.947121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.947489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.947518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.947889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.948293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.948322] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.948546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.948922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.948952] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.949159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.949540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.949569] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 
00:26:18.525 [2024-04-24 20:57:42.949947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.950320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.950350] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.950608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.950977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.951006] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.951451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.951868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.951898] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.952278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.952486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.952524] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.952871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.953246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.953275] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.953489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.953855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.953886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.954261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.954631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.954663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 
00:26:18.525 [2024-04-24 20:57:42.954882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.955261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.955290] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.955622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.955985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.956015] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.956384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.956752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.956782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.957165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.957538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.957567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.957949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.958319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.958348] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.958597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.958833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.958863] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.959272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.959494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.959530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 
00:26:18.525 [2024-04-24 20:57:42.959896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.960228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.960256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.525 [2024-04-24 20:57:42.960476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.960834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.525 [2024-04-24 20:57:42.960864] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.525 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.961239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.961604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.961633] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.962010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.962384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.962413] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.962781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.963164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.963193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.963535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.963908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.963938] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.964316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.964736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.964766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-04-24 20:57:42.965156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.965503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.965531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.965907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.966276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.966304] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.966541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.966754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.966784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.967122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.967345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.967374] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.967633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.967861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.967891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.968264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.968637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.968666] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.969072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.969442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.969471] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-04-24 20:57:42.969848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.970216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.970245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.970442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.970826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.970856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.971262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.971630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.971658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.972028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.972265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.972292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.972657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.973001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.973031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.973407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.973627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.973656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.974022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.974269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.974297] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-04-24 20:57:42.974671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.975046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.975075] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.975452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.975826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.975856] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.976075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.976436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.976464] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.976847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.977178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.977209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.977607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.977820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.977852] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.978090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.978453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.978482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.978852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.979220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.979250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 
00:26:18.526 [2024-04-24 20:57:42.979626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.980001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.980031] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.980294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.980493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.980522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.526 [2024-04-24 20:57:42.980872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.981118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.526 [2024-04-24 20:57:42.981147] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.526 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.981371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.981748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.981778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.982194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.982560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.982588] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.982957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.983162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.983191] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.983461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.983854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.983884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.527 [2024-04-24 20:57:42.984236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.984610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.984638] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.985012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.985375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.985404] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.985624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.985990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.986020] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.986398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.986769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.986799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.987046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.987291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.987320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.987689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.988083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.988114] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.988487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.988854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.988884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.527 [2024-04-24 20:57:42.989239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.989619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.989648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.990049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.990412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.990440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.990668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.991036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.991067] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.991293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.991502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.991531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.991750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.992146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.992176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.992556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.992915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.992945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.993322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.993686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.993715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.527 [2024-04-24 20:57:42.993979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.994354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.994382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.994610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.994996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.995028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.995389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.995746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.995776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.996163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.996530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.996560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.996899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.997284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.997314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.997692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.998076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.998107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.998320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.998694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.998723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 
00:26:18.527 [2024-04-24 20:57:42.999153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.999526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:42.999553] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.527 qpair failed and we were unable to recover it. 00:26:18.527 [2024-04-24 20:57:42.999948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.527 [2024-04-24 20:57:43.000191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.000218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.000608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.000974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.001003] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.001364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.001771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.001801] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.002201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.002446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.002473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.002826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.003204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.003233] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.003596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.003940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.003971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 
00:26:18.528 [2024-04-24 20:57:43.004187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.004574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.004604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.004867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.005223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.005256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.005631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.006029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.006060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.006451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.006666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.006694] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.007094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.007474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.007504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.007880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.008104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.008133] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.008473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.008825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.008855] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 
00:26:18.528 [2024-04-24 20:57:43.009245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.009612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.009641] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.010037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.010404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.010434] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.010861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.011227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.011257] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.011599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.012018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.012049] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.012425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.012792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.012822] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.013190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.013533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.013561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.013926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.014307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.014335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 
00:26:18.528 [2024-04-24 20:57:43.014586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.014948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.014979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.015228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.015594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.015624] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.016013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.016231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.016260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.016638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.017009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.017040] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.017410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.017794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.017824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.018210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.018579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.018609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 00:26:18.528 [2024-04-24 20:57:43.018847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.019202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.019232] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.528 qpair failed and we were unable to recover it. 
00:26:18.528 [2024-04-24 20:57:43.019637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.019878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.528 [2024-04-24 20:57:43.019910] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.020292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.020712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.020752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.021180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.021550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.021586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.021958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.022205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.022234] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.022482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.022860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.022891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.023243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.023622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.023651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.024044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.024424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.024453] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 
00:26:18.529 [2024-04-24 20:57:43.024834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.025080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.025112] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.025496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.025880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.025911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.026138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.026459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.026490] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.026858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.027230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.027260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.027516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.027874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.027906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.028286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.028652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.028682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.028799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.029118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.029148] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 
00:26:18.529 [2024-04-24 20:57:43.029510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.029762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.029794] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.030192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.030556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.030584] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.030814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.031238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.031268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.031611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.031859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.031891] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.032167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.032515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.032545] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.032949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.033310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.033341] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 00:26:18.529 [2024-04-24 20:57:43.033695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.034069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.034101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.529 qpair failed and we were unable to recover it. 
00:26:18.529 [2024-04-24 20:57:43.034495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.034854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.529 [2024-04-24 20:57:43.034886] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.035250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.035637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.035667] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.036030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.036400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.036430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.036798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.037052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.037082] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.037443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.037814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.037844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.038295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.038669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.038698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.038958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.039171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.039201] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 
00:26:18.530 [2024-04-24 20:57:43.039576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.039959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.039989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.040384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.040811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.040843] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.041070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.041442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.041472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.041712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.041992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.042023] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.042376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.042614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.042644] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.042996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.043362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.043393] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.043767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.044131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.044162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 
00:26:18.530 [2024-04-24 20:57:43.044259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.044589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.044619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.045090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.045299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.045329] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.045721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.046146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.046176] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.046624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.046842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.046875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.047256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.047497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.047528] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.047782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.048154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.048185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.048293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.048630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.048660] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 
00:26:18.530 [2024-04-24 20:57:43.049033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.049425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.049455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.049678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.049938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.049970] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.050223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.050594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.050625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.050973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.051334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.051364] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.051622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.052003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.052039] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.052408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.052792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.052825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 00:26:18.530 [2024-04-24 20:57:43.053212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.053456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.053489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.530 qpair failed and we were unable to recover it. 
00:26:18.530 [2024-04-24 20:57:43.053845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.054071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.530 [2024-04-24 20:57:43.054100] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.054331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.054695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.054741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.055113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.055358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.055386] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.055787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.056213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.056245] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.056659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.057032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.057062] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.057437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.057793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.057821] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.058222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.058453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.058483] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 
00:26:18.531 [2024-04-24 20:57:43.058864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.059092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.059131] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.059533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.059904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.059934] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.060312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.060678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.060707] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.060984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.061219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.061248] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.061505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.061843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.061873] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.061975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.062355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.062384] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.062643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.063057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.063087] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 
00:26:18.531 [2024-04-24 20:57:43.063467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.063693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.063722] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.064108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.064327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.064357] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.064700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.064942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.064971] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.065341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.065572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.065608] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.065841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.066258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.066288] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.066519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.066757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.066788] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.067040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.067402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.067431] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 
00:26:18.531 [2024-04-24 20:57:43.067773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.068145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.068175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.068550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.068921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.531 [2024-04-24 20:57:43.068951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.531 qpair failed and we were unable to recover it. 00:26:18.531 [2024-04-24 20:57:43.069195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.069566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.069596] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.069850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.070230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.070259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.070672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.071050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.071081] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.071464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.071672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.071701] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.072098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.072460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.072495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 
00:26:18.532 [2024-04-24 20:57:43.072882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.073207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.073236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.073620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.073985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.074016] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.074396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.074652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.074681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.075038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.075243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.075273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.075647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.075903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.075933] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.076313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.076712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.076754] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.077117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.077487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.077516] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 
00:26:18.532 [2024-04-24 20:57:43.077751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.078197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.078226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.078580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.078956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.078987] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.079371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.079744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.079774] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.080025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.080418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.080446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.080801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.081179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.081209] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.081584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.081808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.081837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.082097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.082503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.082531] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 
00:26:18.532 [2024-04-24 20:57:43.082790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.083002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.083030] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.083390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.083602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.083630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.083976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.084366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.084394] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.084764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.085192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.085222] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.085588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.085946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.085976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.086357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.086564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.086595] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.086973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.087340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.087368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 
00:26:18.532 [2024-04-24 20:57:43.087717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.088115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.088146] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.532 [2024-04-24 20:57:43.088375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.088757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.532 [2024-04-24 20:57:43.088787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.532 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.089045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.089452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.089481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.089635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.089887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.089919] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.090333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.090705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.090761] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.091124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.091483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.091513] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.091899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.092263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.092293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 
00:26:18.533 [2024-04-24 20:57:43.092697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.092948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.092976] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.093361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.093572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.093599] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.093840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.094074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.094102] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.094217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.094470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.094499] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.094723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.095090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.095119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.095343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.095699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.095743] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.096133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.096495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.096525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 
00:26:18.533 [2024-04-24 20:57:43.096897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.097111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.097141] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.097484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.097864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.097895] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.098287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.098654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.098685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.099101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.099469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.099497] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.099855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.100077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.100105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.100437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.100813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.100844] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.101225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.101587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.101615] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 
00:26:18.533 [2024-04-24 20:57:43.101854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.102248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.102276] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.102513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.102746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.102776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.103045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.103416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.103447] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.103834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.104207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.104236] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.533 qpair failed and we were unable to recover it. 00:26:18.533 [2024-04-24 20:57:43.104606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.533 [2024-04-24 20:57:43.104970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.105000] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.105380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.105750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.105783] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.106025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.106383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.106412] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 
00:26:18.534 [2024-04-24 20:57:43.106794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.107024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.107052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.107433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.107804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.107834] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.108195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.108580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.108610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.109069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.109444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.109474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.109630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.109880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.109911] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.110298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.110507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.110534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.110906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.111273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.111303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 
00:26:18.534 [2024-04-24 20:57:43.111710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.112095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.112125] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.112495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.112854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.112885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.113257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.113619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.113648] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.114058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.114433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.114462] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.114849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.115086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.115115] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.115469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.115866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.115896] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.116160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.116493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.116522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 
00:26:18.534 [2024-04-24 20:57:43.116767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.117128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.117158] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.117541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.117911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.117940] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.118328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.118671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.118700] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.119094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.119460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.119489] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.534 qpair failed and we were unable to recover it. 00:26:18.534 [2024-04-24 20:57:43.119841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.120237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.534 [2024-04-24 20:57:43.120267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.120667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.120880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.120909] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.121340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.121708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.121748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 
00:26:18.535 [2024-04-24 20:57:43.122173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.122551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.122582] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.122804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.123051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.123084] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.123320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.123666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.123695] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.123936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.124150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.124181] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.124596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.124835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.124867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.125106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.125489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.125518] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.125894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.126256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.126286] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 
00:26:18.535 [2024-04-24 20:57:43.126667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.126901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.126930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.127317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.127690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.127719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.128146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.128236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.128261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.128613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.128822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.128851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.129269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.129462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.129492] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.129866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.130261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.130291] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.535 [2024-04-24 20:57:43.130658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.130874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.130905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 
00:26:18.535 [2024-04-24 20:57:43.131314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.131678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.535 [2024-04-24 20:57:43.131708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.535 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.132113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.132475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.132507] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.132875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.133263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.133294] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.133668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.133914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.133948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.134200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.134407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.134435] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.134790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.135008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.135037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.135413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.135793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.135824] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 
00:26:18.536 [2024-04-24 20:57:43.136205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.136592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.136622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.136971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.137352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.137382] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.137772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.138032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.138060] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.138423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.138810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.138840] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.139069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.139464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.139493] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.139762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.140088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.140118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.140493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.140892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.140922] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 
00:26:18.536 [2024-04-24 20:57:43.141288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.141663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.141693] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.142094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.142510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.142540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.142783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.143067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.143097] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.143450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.143661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.143690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.144092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.144425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.144455] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.144866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.145109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.145138] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.145543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.145758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.145787] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 
00:26:18.536 [2024-04-24 20:57:43.146133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.146505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.146535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.146904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.147235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.147263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.147637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.148027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.148058] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.148353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.148564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.148594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.148835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.149183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.149213] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.536 [2024-04-24 20:57:43.149435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.149822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.536 [2024-04-24 20:57:43.149853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.536 qpair failed and we were unable to recover it. 00:26:18.537 [2024-04-24 20:57:43.150233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.537 [2024-04-24 20:57:43.150583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.537 [2024-04-24 20:57:43.150612] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.537 qpair failed and we were unable to recover it. 
00:26:18.537 [2024-04-24 20:57:43.150833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.151221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.151255] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.804 qpair failed and we were unable to recover it. 00:26:18.804 [2024-04-24 20:57:43.151628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.151870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.151903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.804 qpair failed and we were unable to recover it. 00:26:18.804 [2024-04-24 20:57:43.152273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.152689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.152719] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.804 qpair failed and we were unable to recover it. 00:26:18.804 [2024-04-24 20:57:43.152994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.153233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.153265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.804 qpair failed and we were unable to recover it. 00:26:18.804 [2024-04-24 20:57:43.153672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.154039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.804 [2024-04-24 20:57:43.154070] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.804 qpair failed and we were unable to recover it. 00:26:18.804 [2024-04-24 20:57:43.154446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.154810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.154841] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.155052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.155442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.155472] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 
00:26:18.805 [2024-04-24 20:57:43.155853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.156233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.156263] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.156651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.156898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.156928] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.157333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.157704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.157746] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.158115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.158323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.158354] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.158620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.159027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.159059] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.159399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.159767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.159810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.160213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.160424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.160452] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 
00:26:18.805 [2024-04-24 20:57:43.160707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.161131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.161162] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.161535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.161917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.161946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.162342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.162554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.162581] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.162877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.163258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.163289] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.163521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.163921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.163963] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.164332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.164700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.164741] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.165080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.165451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.165480] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 
00:26:18.805 [2024-04-24 20:57:43.165861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.166232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.166261] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.166636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.166886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.166921] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.167287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.167652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.167681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.167928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.168126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.168154] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.168509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.168871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.168903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.169276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.169659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.169690] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.170062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.170270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.170299] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 
00:26:18.805 [2024-04-24 20:57:43.170702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.171067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.171105] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.171478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.171843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.171877] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.805 qpair failed and we were unable to recover it. 00:26:18.805 [2024-04-24 20:57:43.172251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.172630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.805 [2024-04-24 20:57:43.172662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.173031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.173398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.173430] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.173656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.174029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.174061] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.174445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.174852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.174883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.175249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.175518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.175549] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 
00:26:18.806 [2024-04-24 20:57:43.175772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.176127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.176156] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.176381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.176746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.176776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.177183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.177545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.177575] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.177837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.178070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.178107] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.178443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.178825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.178857] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.179263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.179627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.179658] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.180040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.180394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.180424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 
00:26:18.806 [2024-04-24 20:57:43.180746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.180962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.180991] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.181248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.181587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.181617] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.181971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.182360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.182389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.182772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.183156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.183185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.183446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.183851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.183883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.184277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.184498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.184529] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.184885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.185256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.185292] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 
00:26:18.806 [2024-04-24 20:57:43.185550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.185774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.185805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.186223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.186578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.186607] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.186863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.187223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.187253] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.187641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.188011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.188041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.188282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.188754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.188784] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.189192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.189510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.189539] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.189909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.190159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.190187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 
00:26:18.806 [2024-04-24 20:57:43.190562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.190914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.806 [2024-04-24 20:57:43.190945] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.806 qpair failed and we were unable to recover it. 00:26:18.806 [2024-04-24 20:57:43.191329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.191544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.191572] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.191954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.192195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.192224] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.192588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.192958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.192989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.193273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.193511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.193540] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.193922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.194132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.194163] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.194534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.194907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.194937] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 
00:26:18.807 [2024-04-24 20:57:43.195306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.195558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.195586] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.195964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.196322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.196351] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.196738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.196998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.197027] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.197397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.197745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.197776] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.198143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.198475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.198501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.198960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.199333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.199362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.199762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.200135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.200164] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 
00:26:18.807 [2024-04-24 20:57:43.200388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.200623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.200652] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.201024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.201238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.201268] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.201495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.201839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.201869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.202250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.202466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.202495] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.202875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.203286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.203314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.203585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.203948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.203980] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.204226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.204439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.204467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 
00:26:18.807 [2024-04-24 20:57:43.204817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.205061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.205090] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.205353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.205712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.205752] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.206126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.206497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.206526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.206875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.207008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.207036] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.207504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.207762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.207791] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.208043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.208433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.208461] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.208847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.209215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.209244] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 
00:26:18.807 [2024-04-24 20:57:43.209478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.209859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.209890] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.210281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.210635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.807 [2024-04-24 20:57:43.210663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.807 qpair failed and we were unable to recover it. 00:26:18.807 [2024-04-24 20:57:43.211042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.211246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.211274] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.211509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.211905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.211935] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.212158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.212412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.212442] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.212822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.213197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.213226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.213601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.213823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.213853] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 
00:26:18.808 [2024-04-24 20:57:43.214108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.214467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.214496] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.214851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.215092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.215122] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.215488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.215705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.215753] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.216132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.216497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.216525] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.216755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.216981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.217009] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.217410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.217779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.217810] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.218169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.218537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.218567] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 
00:26:18.808 [2024-04-24 20:57:43.218795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.219198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.219226] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.219457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.219707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.219757] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.220116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.220327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.220355] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.220506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.220890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.220920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.221304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.221680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.221709] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.222007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.222394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.222424] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.222853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.223250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.223280] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 
00:26:18.808 [2024-04-24 20:57:43.223541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 20:57:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:18.808 [2024-04-24 20:57:43.223766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.223796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 20:57:43 -- common/autotest_common.sh@850 -- # return 0 00:26:18.808 [2024-04-24 20:57:43.224143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 20:57:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:18.808 20:57:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:18.808 [2024-04-24 20:57:43.224525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.224554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:26:18.808 [2024-04-24 20:57:43.224918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.225115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.225142] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.225370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.225618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.808 [2024-04-24 20:57:43.225647] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.808 qpair failed and we were unable to recover it. 00:26:18.808 [2024-04-24 20:57:43.225981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.226185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.226216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.226469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.226816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.226846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 
00:26:18.809 [2024-04-24 20:57:43.227212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.227581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.227610] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.227959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.228359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.228389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.228776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.229145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.229175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.229563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.229819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.229850] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.230114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.230480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.230509] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.230869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.231236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.231265] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.231497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.231854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.231884] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 
00:26:18.809 [2024-04-24 20:57:43.232282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.232653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.232682] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.232934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.233362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.233392] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.233792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.234164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.234195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.234584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.234964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.234994] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.235370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.235551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.235579] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.235827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.236197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.236225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.236608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.236918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.236946] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 
00:26:18.809 [2024-04-24 20:57:43.237322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.237687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.237715] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.238129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.238491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.238522] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.238906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.239272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.239303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.239678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.240016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.240052] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.240312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.240425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.240456] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.240701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.241149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.241180] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.809 qpair failed and we were unable to recover it. 00:26:18.809 [2024-04-24 20:57:43.241549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.241916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.809 [2024-04-24 20:57:43.241948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 
00:26:18.810 [2024-04-24 20:57:43.242161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.242526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.242555] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.242770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.242999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.243028] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.243383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.243784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.243813] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.244074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.244438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.244469] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.244696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.245087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.245118] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.245497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.245744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.245778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.246160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.246559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.246594] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 
00:26:18.810 [2024-04-24 20:57:43.246936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.247342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.247373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.247708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.248093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.248123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.248360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.248721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.248764] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.248870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.249072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.249101] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.249474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.249814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.249847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.250185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.250483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.250511] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.250889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.251102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.251130] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 
00:26:18.810 [2024-04-24 20:57:43.251363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.251577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.251609] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.251967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.252329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.252359] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.252744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.253133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.253169] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.253502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.253844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.253875] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.254282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.254651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.254681] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.255049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.255289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.255318] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.255745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.256125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.256153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 
00:26:18.810 [2024-04-24 20:57:43.256536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.256892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.256924] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.257293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.257648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.257677] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.257902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.258275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.258306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.258686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.258899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.258929] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.259184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.259547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.259577] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.259797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.260182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.260216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 00:26:18.810 [2024-04-24 20:57:43.260592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.260956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.260986] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.810 qpair failed and we were unable to recover it. 
00:26:18.810 [2024-04-24 20:57:43.261365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.810 [2024-04-24 20:57:43.261746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.261778] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.262190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.262521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.262551] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.262922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.263284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.263315] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.263672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.264053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.264083] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.264319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.264673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.264702] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.265087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 20:57:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:18.811 [2024-04-24 20:57:43.265448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.265485] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 
00:26:18.811 20:57:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:18.811 20:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.811 [2024-04-24 20:57:43.265881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:26:18.811 [2024-04-24 20:57:43.266248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.266278] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.266509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.266839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.266869] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.267266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.267643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.267671] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.268064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.268400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.268429] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.268649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.269045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.269078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.269493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.269853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.269885] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.270265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.270638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.270669] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 
00:26:18.811 [2024-04-24 20:57:43.271104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.271477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.271506] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.271870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.272247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.272277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.272652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.273025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.273056] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.273273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.273633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.273662] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.274052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.274395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.274425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.274814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.275186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.275216] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.275583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.275794] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.275825] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 
00:26:18.811 [2024-04-24 20:57:43.276230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.276443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.276473] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.276860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.277225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.277256] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.277643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.278009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.278041] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.278406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.278663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.278696] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.279082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.279452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.279482] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.279863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.280243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.280273] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.280658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.281046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.281077] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 
00:26:18.811 [2024-04-24 20:57:43.281374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.281628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.281657] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.811 qpair failed and we were unable to recover it. 00:26:18.811 [2024-04-24 20:57:43.282035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.282333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.811 [2024-04-24 20:57:43.282362] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.282762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.283136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.283166] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.283538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.283897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.283926] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.284304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.284573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.284600] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.284828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.285042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.285069] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.285487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.285696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.285723] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 
00:26:18.812 [2024-04-24 20:57:43.286122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.286497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.286526] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.286901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.287279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.287309] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.287708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.288120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.288150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.288385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.288796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.288827] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.289215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.289576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.289604] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.289759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 Malloc0 00:26:18.812 [2024-04-24 20:57:43.290165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.290194] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 20:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.812 [2024-04-24 20:57:43.290573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.290917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.290948] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 20:57:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:18.812 20:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.812 [2024-04-24 20:57:43.291324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:26:18.812 [2024-04-24 20:57:43.291695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.291756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.292159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.292526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.292554] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.292919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.293170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.293198] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.293498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.293623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.293656] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.294063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.294459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.294487] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.294817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.295178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.295206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.295568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.295919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.295958] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 
00:26:18.812 [2024-04-24 20:57:43.296354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.296591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.296619] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.296999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.297113] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:18.812 [2024-04-24 20:57:43.297362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.297390] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.297762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.298166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.298195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.298572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.298949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.298979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.299396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.299724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.299767] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.300179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.300539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.300568] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.301007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.301249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.301277] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 
00:26:18.812 [2024-04-24 20:57:43.301658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.301920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.301951] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.812 qpair failed and we were unable to recover it. 00:26:18.812 [2024-04-24 20:57:43.302360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.812 [2024-04-24 20:57:43.302474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.302500] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.302961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.303336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.303366] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.303755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.304155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.304185] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.304554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.304930] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.304960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.305293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.305531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.305561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.305881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.306141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.306171] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 
00:26:18.813 20:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.813 [2024-04-24 20:57:43.306413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 20:57:43 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:18.813 [2024-04-24 20:57:43.306830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.306861] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 20:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.813 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:26:18.813 [2024-04-24 20:57:43.307243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.307624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.307653] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.308052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.308298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.308326] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.308621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.308959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.308989] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.309210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.309486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.309521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.309892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.310276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.310306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 
00:26:18.813 [2024-04-24 20:57:43.310654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.310891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.310920] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.311285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.311506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.311534] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.311888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.312130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.312157] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.312519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.312853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.312883] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.313305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.313691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.313718] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.314102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.314351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.314378] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.314764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.315188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.315218] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 
00:26:18.813 [2024-04-24 20:57:43.315585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.315776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.315808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.316216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.316597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.316625] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.316990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.317358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.317388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.317770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.318155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.318184] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 20:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.813 [2024-04-24 20:57:43.318569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 20:57:43 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:18.813 [2024-04-24 20:57:43.318786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.318816] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 20:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.813 [2024-04-24 20:57:43.319133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:26:18.813 [2024-04-24 20:57:43.319532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.319563] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.320009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.320378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.320407] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 
00:26:18.813 [2024-04-24 20:57:43.320783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.321017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.321044] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.813 qpair failed and we were unable to recover it. 00:26:18.813 [2024-04-24 20:57:43.321413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.813 [2024-04-24 20:57:43.321778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.321808] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.322137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.322504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.322532] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.322893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.323264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.323293] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.323540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.323834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.323865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.324236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.324610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.324640] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.325002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.325359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.325388] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 
00:26:18.814 [2024-04-24 20:57:43.325781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.326164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.326193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.326640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.327001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.327032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.327403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.327768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.327798] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.328199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.328593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.328622] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.328867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.329111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.329140] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.329511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.329878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.329908] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 20:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.814 [2024-04-24 20:57:43.330289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 20:57:43 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:18.814 [2024-04-24 20:57:43.330673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.330708] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 
00:26:18.814 20:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.814 [2024-04-24 20:57:43.331103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:26:18.814 [2024-04-24 20:57:43.331328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.331356] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.331782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.332166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.332195] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.332452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.332817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.332846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.333212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.333416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.333445] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.333817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.334130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.334160] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.334515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.334876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.334906] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.335283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.335532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.335560] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 
00:26:18.814 [2024-04-24 20:57:43.335943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.336310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.336338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.336739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.337146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:18.814 [2024-04-24 20:57:43.337175] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcd78000b90 with addr=10.0.0.2, port=4420 00:26:18.814 qpair failed and we were unable to recover it. 00:26:18.814 [2024-04-24 20:57:43.337514] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:18.814 20:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.814 20:57:43 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:18.814 20:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.814 20:57:43 -- common/autotest_common.sh@10 -- # set +x 00:26:18.814 [2024-04-24 20:57:43.348236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.814 [2024-04-24 20:57:43.348408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.348462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.348485] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.348505] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.348559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 
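For reference, the target-side configuration this part of the log exercises (namespace attach, data listener, and discovery listener on 10.0.0.2:4420) corresponds to the RPC sequence sketched below. This is a minimal sketch using SPDK's scripts/rpc.py rather than the test's rpc_cmd wrapper; the transport/subsystem creation steps and the Malloc0 bdev parameters are assumptions, since they are not shown in this excerpt.
  # assumed setup steps (not visible in this excerpt): transport, bdev, subsystem
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # size/block size assumed
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  # steps that do appear in the log above, via rpc_cmd
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420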
00:26:18.815 20:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.815 20:57:43 -- host/target_disconnect.sh@58 -- # wait 2935676 00:26:18.815 [2024-04-24 20:57:43.358180] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.358285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.358326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.358344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.358358] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.358393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 00:26:18.815 [2024-04-24 20:57:43.368159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.368247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.368277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.368288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.368298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.368323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 00:26:18.815 [2024-04-24 20:57:43.378123] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.378199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.378221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.378229] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.378236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.378255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 
00:26:18.815 [2024-04-24 20:57:43.388214] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.388309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.388330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.388339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.388346] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.388363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 00:26:18.815 [2024-04-24 20:57:43.398233] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.398304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.398325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.398333] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.398340] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.398359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 00:26:18.815 [2024-04-24 20:57:43.408205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.408280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.408302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.408310] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.408317] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.408335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 
00:26:18.815 [2024-04-24 20:57:43.418211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.418283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.418304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.418312] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.418319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.418336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 00:26:18.815 [2024-04-24 20:57:43.428178] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.428254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.428274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.428288] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.428295] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.428313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 00:26:18.815 [2024-04-24 20:57:43.438247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:18.815 [2024-04-24 20:57:43.438325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:18.815 [2024-04-24 20:57:43.438346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:18.815 [2024-04-24 20:57:43.438355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:18.815 [2024-04-24 20:57:43.438362] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:18.815 [2024-04-24 20:57:43.438379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:18.815 qpair failed and we were unable to recover it. 
00:26:19.079 [2024-04-24 20:57:43.448284] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.448365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.448386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.448395] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.448404] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.448420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 00:26:19.079 [2024-04-24 20:57:43.458310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.458374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.458396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.458404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.458411] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.458429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 00:26:19.079 [2024-04-24 20:57:43.468461] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.468542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.468568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.468576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.468584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.468605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 
00:26:19.079 [2024-04-24 20:57:43.478408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.478486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.478524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.478534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.478541] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.478563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 00:26:19.079 [2024-04-24 20:57:43.488432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.488511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.488546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.488556] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.488564] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.488586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 00:26:19.079 [2024-04-24 20:57:43.498423] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.498493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.498516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.498525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.498531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.498551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 
00:26:19.079 [2024-04-24 20:57:43.508533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.508621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.508642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.508650] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.508658] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.508675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 00:26:19.079 [2024-04-24 20:57:43.518482] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.518554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.518581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.079 [2024-04-24 20:57:43.518589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.079 [2024-04-24 20:57:43.518597] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.079 [2024-04-24 20:57:43.518615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.079 qpair failed and we were unable to recover it. 00:26:19.079 [2024-04-24 20:57:43.528651] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.079 [2024-04-24 20:57:43.528747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.079 [2024-04-24 20:57:43.528769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.528777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.528783] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.528802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 
00:26:19.080 [2024-04-24 20:57:43.538612] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.538685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.538706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.538715] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.538722] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.538748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.548661] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.548750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.548771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.548779] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.548786] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.548803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.558691] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.558771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.558792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.558802] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.558810] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.558832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 
00:26:19.080 [2024-04-24 20:57:43.568614] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.568688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.568708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.568717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.568731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.568750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.578541] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.578610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.578631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.578639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.578648] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.578666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.588738] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.588840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.588861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.588869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.588878] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.588894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 
00:26:19.080 [2024-04-24 20:57:43.598749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.598811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.598832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.598840] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.598848] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.598865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.608723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.608792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.608817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.608826] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.608833] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.608849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.618795] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.618860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.618880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.618888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.618894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.618911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 
00:26:19.080 [2024-04-24 20:57:43.628848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.628933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.628954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.628963] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.628970] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.628986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.638890] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.638965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.638985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.638993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.639000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.639017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.080 [2024-04-24 20:57:43.648916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.648988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.649008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.649016] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.649023] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.649044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 
00:26:19.080 [2024-04-24 20:57:43.658925] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.080 [2024-04-24 20:57:43.658994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.080 [2024-04-24 20:57:43.659014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.080 [2024-04-24 20:57:43.659021] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.080 [2024-04-24 20:57:43.659027] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.080 [2024-04-24 20:57:43.659044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.080 qpair failed and we were unable to recover it. 00:26:19.081 [2024-04-24 20:57:43.668982] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.081 [2024-04-24 20:57:43.669066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.081 [2024-04-24 20:57:43.669086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.081 [2024-04-24 20:57:43.669094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.081 [2024-04-24 20:57:43.669101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.081 [2024-04-24 20:57:43.669117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.081 qpair failed and we were unable to recover it. 00:26:19.081 [2024-04-24 20:57:43.679026] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.081 [2024-04-24 20:57:43.679097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.081 [2024-04-24 20:57:43.679117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.081 [2024-04-24 20:57:43.679125] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.081 [2024-04-24 20:57:43.679131] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.081 [2024-04-24 20:57:43.679149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.081 qpair failed and we were unable to recover it. 
00:26:19.081 [2024-04-24 20:57:43.689012] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.081 [2024-04-24 20:57:43.689071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.081 [2024-04-24 20:57:43.689091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.081 [2024-04-24 20:57:43.689099] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.081 [2024-04-24 20:57:43.689106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.081 [2024-04-24 20:57:43.689122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.081 qpair failed and we were unable to recover it. 00:26:19.081 [2024-04-24 20:57:43.699057] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.081 [2024-04-24 20:57:43.699129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.081 [2024-04-24 20:57:43.699149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.081 [2024-04-24 20:57:43.699158] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.081 [2024-04-24 20:57:43.699164] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.081 [2024-04-24 20:57:43.699181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.081 qpair failed and we were unable to recover it. 00:26:19.081 [2024-04-24 20:57:43.708962] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.081 [2024-04-24 20:57:43.709084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.081 [2024-04-24 20:57:43.709107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.081 [2024-04-24 20:57:43.709115] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.081 [2024-04-24 20:57:43.709123] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.081 [2024-04-24 20:57:43.709140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.081 qpair failed and we were unable to recover it. 
00:26:19.345 [2024-04-24 20:57:43.719105] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.345 [2024-04-24 20:57:43.719175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.345 [2024-04-24 20:57:43.719197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.719205] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.719212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.719230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.729100] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.729163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.729184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.729192] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.729199] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.729216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.739054] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.739120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.739141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.739149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.739166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.739183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 
00:26:19.346 [2024-04-24 20:57:43.749205] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.749322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.749344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.749352] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.749359] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.749376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.759235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.759306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.759327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.759335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.759342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.759359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.769133] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.769213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.769234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.769242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.769249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.769266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 
00:26:19.346 [2024-04-24 20:57:43.779289] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.779353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.779373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.779381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.779388] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.779405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.789328] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.789454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.789476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.789484] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.789492] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.789508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.799356] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.799424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.799445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.799454] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.799461] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.799478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 
00:26:19.346 [2024-04-24 20:57:43.809358] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.809429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.809450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.809459] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.809465] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.809484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.819408] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.819473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.819493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.819502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.346 [2024-04-24 20:57:43.819509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.346 [2024-04-24 20:57:43.819526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.346 qpair failed and we were unable to recover it. 00:26:19.346 [2024-04-24 20:57:43.829417] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.346 [2024-04-24 20:57:43.829498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.346 [2024-04-24 20:57:43.829520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.346 [2024-04-24 20:57:43.829534] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.829543] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.829560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-24 20:57:43.839456] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.839522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.839543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.839551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.839558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.839576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.849463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.849533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.849553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.849562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.849569] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.849586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.859572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.859637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.859658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.859666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.859672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.859688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-24 20:57:43.869564] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.869646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.869666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.869674] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.869681] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.869698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.879573] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.879642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.879663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.879670] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.879677] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.879693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.889601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.889666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.889687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.889695] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.889701] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.889718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-24 20:57:43.899642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.899712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.899739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.899748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.899755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.899773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.909686] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.909774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.909796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.909804] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.909812] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.909830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.919643] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.919730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.919755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.919764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.919773] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.919791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 
00:26:19.347 [2024-04-24 20:57:43.929678] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.929747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.929767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.929775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.929782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.929800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.939773] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.939850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.939871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.347 [2024-04-24 20:57:43.939879] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.347 [2024-04-24 20:57:43.939886] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.347 [2024-04-24 20:57:43.939902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.347 qpair failed and we were unable to recover it. 00:26:19.347 [2024-04-24 20:57:43.949809] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.347 [2024-04-24 20:57:43.949893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.347 [2024-04-24 20:57:43.949914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.348 [2024-04-24 20:57:43.949922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.348 [2024-04-24 20:57:43.949929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.348 [2024-04-24 20:57:43.949946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.348 [2024-04-24 20:57:43.959826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.348 [2024-04-24 20:57:43.959893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.348 [2024-04-24 20:57:43.959914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.348 [2024-04-24 20:57:43.959922] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.348 [2024-04-24 20:57:43.959929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.348 [2024-04-24 20:57:43.959953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-24 20:57:43.969723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.348 [2024-04-24 20:57:43.969792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.348 [2024-04-24 20:57:43.969813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.348 [2024-04-24 20:57:43.969820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.348 [2024-04-24 20:57:43.969827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.348 [2024-04-24 20:57:43.969844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.348 qpair failed and we were unable to recover it. 00:26:19.348 [2024-04-24 20:57:43.979767] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.348 [2024-04-24 20:57:43.979843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.348 [2024-04-24 20:57:43.979866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.348 [2024-04-24 20:57:43.979875] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.348 [2024-04-24 20:57:43.979882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.348 [2024-04-24 20:57:43.979900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.348 qpair failed and we were unable to recover it. 
00:26:19.611 [2024-04-24 20:57:43.989924] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.611 [2024-04-24 20:57:43.990010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.611 [2024-04-24 20:57:43.990031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.611 [2024-04-24 20:57:43.990040] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.611 [2024-04-24 20:57:43.990048] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.611 [2024-04-24 20:57:43.990065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.611 qpair failed and we were unable to recover it. 00:26:19.611 [2024-04-24 20:57:43.999928] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.611 [2024-04-24 20:57:44.000000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.611 [2024-04-24 20:57:44.000021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.611 [2024-04-24 20:57:44.000030] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.611 [2024-04-24 20:57:44.000038] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.611 [2024-04-24 20:57:44.000055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.611 qpair failed and we were unable to recover it. 00:26:19.611 [2024-04-24 20:57:44.009849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.611 [2024-04-24 20:57:44.009930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.611 [2024-04-24 20:57:44.009956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.611 [2024-04-24 20:57:44.009964] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.611 [2024-04-24 20:57:44.009972] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.611 [2024-04-24 20:57:44.009989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.611 qpair failed and we were unable to recover it. 
00:26:19.611 [2024-04-24 20:57:44.020031] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.611 [2024-04-24 20:57:44.020102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.611 [2024-04-24 20:57:44.020124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.611 [2024-04-24 20:57:44.020132] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.611 [2024-04-24 20:57:44.020139] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.611 [2024-04-24 20:57:44.020156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.611 qpair failed and we were unable to recover it. 00:26:19.612 [2024-04-24 20:57:44.029912] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.029994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.030016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.030024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.030031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.030049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 00:26:19.612 [2024-04-24 20:57:44.039955] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.040061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.040085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.040094] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.040101] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.040119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 
00:26:19.612 [2024-04-24 20:57:44.050083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.050147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.050168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.050177] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.050183] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.050207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 00:26:19.612 [2024-04-24 20:57:44.060063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.060179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.060201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.060209] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.060216] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.060233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 00:26:19.612 [2024-04-24 20:57:44.070186] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.070315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.070336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.070344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.070351] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.070368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 
00:26:19.612 [2024-04-24 20:57:44.080224] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.080291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.080312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.080320] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.080327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.080345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 00:26:19.612 [2024-04-24 20:57:44.090252] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.090325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.090346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.090355] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.090363] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.090380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 00:26:19.612 [2024-04-24 20:57:44.100286] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.100364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.100391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.100400] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.100408] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.100424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.612 qpair failed and we were unable to recover it. 
00:26:19.612 [2024-04-24 20:57:44.110327] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.612 [2024-04-24 20:57:44.110406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.612 [2024-04-24 20:57:44.110427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.612 [2024-04-24 20:57:44.110435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.612 [2024-04-24 20:57:44.110442] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.612 [2024-04-24 20:57:44.110458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 00:26:19.613 [2024-04-24 20:57:44.120350] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.120428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.120463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.120474] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.120482] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.120504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 00:26:19.613 [2024-04-24 20:57:44.130431] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.130536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.130564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.130577] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.130584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.130604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 
00:26:19.613 [2024-04-24 20:57:44.140366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.140431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.140454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.140462] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.140475] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.140495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 00:26:19.613 [2024-04-24 20:57:44.150479] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.150562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.150584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.150592] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.150599] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.150618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 00:26:19.613 [2024-04-24 20:57:44.160475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.160544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.160565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.160573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.160580] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.160599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 
00:26:19.613 [2024-04-24 20:57:44.170508] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.170577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.170598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.170606] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.170613] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.170631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 00:26:19.613 [2024-04-24 20:57:44.180625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.180690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.180711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.180718] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.180732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.180751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 00:26:19.613 [2024-04-24 20:57:44.190579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.190668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.190689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.190698] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.190704] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.613 [2024-04-24 20:57:44.190722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.613 qpair failed and we were unable to recover it. 
00:26:19.613 [2024-04-24 20:57:44.200463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.613 [2024-04-24 20:57:44.200533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.613 [2024-04-24 20:57:44.200554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.613 [2024-04-24 20:57:44.200562] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.613 [2024-04-24 20:57:44.200568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.614 [2024-04-24 20:57:44.200586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.614 qpair failed and we were unable to recover it. 00:26:19.614 [2024-04-24 20:57:44.210640] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.614 [2024-04-24 20:57:44.210705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.614 [2024-04-24 20:57:44.210733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.614 [2024-04-24 20:57:44.210741] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.614 [2024-04-24 20:57:44.210748] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.614 [2024-04-24 20:57:44.210766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.614 qpair failed and we were unable to recover it. 00:26:19.614 [2024-04-24 20:57:44.220665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.614 [2024-04-24 20:57:44.220744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.614 [2024-04-24 20:57:44.220765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.614 [2024-04-24 20:57:44.220774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.614 [2024-04-24 20:57:44.220782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.614 [2024-04-24 20:57:44.220800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.614 qpair failed and we were unable to recover it. 
00:26:19.614 [2024-04-24 20:57:44.230753] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.614 [2024-04-24 20:57:44.230861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.614 [2024-04-24 20:57:44.230880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.614 [2024-04-24 20:57:44.230894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.614 [2024-04-24 20:57:44.230901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.614 [2024-04-24 20:57:44.230919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.614 qpair failed and we were unable to recover it. 00:26:19.614 [2024-04-24 20:57:44.240723] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.614 [2024-04-24 20:57:44.240790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.614 [2024-04-24 20:57:44.240810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.614 [2024-04-24 20:57:44.240819] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.614 [2024-04-24 20:57:44.240826] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.614 [2024-04-24 20:57:44.240843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.614 qpair failed and we were unable to recover it. 00:26:19.877 [2024-04-24 20:57:44.250772] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.877 [2024-04-24 20:57:44.250835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.877 [2024-04-24 20:57:44.250855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.877 [2024-04-24 20:57:44.250863] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.877 [2024-04-24 20:57:44.250870] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.877 [2024-04-24 20:57:44.250887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.877 qpair failed and we were unable to recover it. 
00:26:19.877 [2024-04-24 20:57:44.260820] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.877 [2024-04-24 20:57:44.260885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.877 [2024-04-24 20:57:44.260905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.877 [2024-04-24 20:57:44.260914] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.877 [2024-04-24 20:57:44.260921] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.877 [2024-04-24 20:57:44.260938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.877 qpair failed and we were unable to recover it. 00:26:19.877 [2024-04-24 20:57:44.270860] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.877 [2024-04-24 20:57:44.270945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.877 [2024-04-24 20:57:44.270966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.877 [2024-04-24 20:57:44.270974] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.877 [2024-04-24 20:57:44.270982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.877 [2024-04-24 20:57:44.271001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.877 qpair failed and we were unable to recover it. 00:26:19.877 [2024-04-24 20:57:44.280849] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.877 [2024-04-24 20:57:44.280916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.877 [2024-04-24 20:57:44.280937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.280945] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.280953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.280969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 
00:26:19.878 [2024-04-24 20:57:44.290945] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.291058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.291079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.291087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.291094] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.291111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.300937] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.301019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.301040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.301048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.301055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.301072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.311005] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.311086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.311106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.311114] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.311120] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.311138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 
00:26:19.878 [2024-04-24 20:57:44.320994] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.321058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.321078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.321091] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.321098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.321114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.331030] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.331094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.331115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.331123] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.331130] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.331147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.341072] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.341145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.341165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.341173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.341180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.341198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 
00:26:19.878 [2024-04-24 20:57:44.351111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.351190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.351212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.351220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.351227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.351245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.361083] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.361148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.361167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.361175] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.361182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.361200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.371156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.371256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.371277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.371286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.371294] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.371310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 
00:26:19.878 [2024-04-24 20:57:44.381196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.381265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.381286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.381294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.381301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.381317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.391229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.391309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.391330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.391338] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.391344] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.391362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 00:26:19.878 [2024-04-24 20:57:44.401260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.401335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.401355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.401363] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.401372] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.878 [2024-04-24 20:57:44.401389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.878 qpair failed and we were unable to recover it. 
00:26:19.878 [2024-04-24 20:57:44.411278] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.878 [2024-04-24 20:57:44.411350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.878 [2024-04-24 20:57:44.411375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.878 [2024-04-24 20:57:44.411383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.878 [2024-04-24 20:57:44.411391] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.411407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:19.879 [2024-04-24 20:57:44.421310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.421380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.421401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.421410] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.421417] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.421435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:19.879 [2024-04-24 20:57:44.431246] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.431319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.431340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.431348] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.431355] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.431371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 
00:26:19.879 [2024-04-24 20:57:44.441386] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.441459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.441480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.441488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.441497] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.441515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:19.879 [2024-04-24 20:57:44.451414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.451487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.451523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.451533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.451542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.451576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:19.879 [2024-04-24 20:57:44.461460] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.461535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.461570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.461580] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.461587] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.461609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 
00:26:19.879 [2024-04-24 20:57:44.471381] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.471467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.471490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.471499] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.471508] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.471528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:19.879 [2024-04-24 20:57:44.481512] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.481625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.481662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.481672] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.481679] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.481702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:19.879 [2024-04-24 20:57:44.491529] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.491595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.491618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.491626] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.491633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.491653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 
00:26:19.879 [2024-04-24 20:57:44.501591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.501659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.501688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.501697] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.501706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.501732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:19.879 [2024-04-24 20:57:44.511601] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:19.879 [2024-04-24 20:57:44.511684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:19.879 [2024-04-24 20:57:44.511705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:19.879 [2024-04-24 20:57:44.511713] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:19.879 [2024-04-24 20:57:44.511719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:19.879 [2024-04-24 20:57:44.511745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:19.879 qpair failed and we were unable to recover it. 00:26:20.143 [2024-04-24 20:57:44.521617] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.521695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.143 [2024-04-24 20:57:44.521716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.143 [2024-04-24 20:57:44.521724] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.143 [2024-04-24 20:57:44.521738] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.143 [2024-04-24 20:57:44.521756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.143 qpair failed and we were unable to recover it. 
00:26:20.143 [2024-04-24 20:57:44.531667] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.531740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.143 [2024-04-24 20:57:44.531762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.143 [2024-04-24 20:57:44.531771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.143 [2024-04-24 20:57:44.531780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.143 [2024-04-24 20:57:44.531799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.143 qpair failed and we were unable to recover it. 00:26:20.143 [2024-04-24 20:57:44.541737] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.541809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.143 [2024-04-24 20:57:44.541830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.143 [2024-04-24 20:57:44.541838] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.143 [2024-04-24 20:57:44.541851] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.143 [2024-04-24 20:57:44.541870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.143 qpair failed and we were unable to recover it. 00:26:20.143 [2024-04-24 20:57:44.551751] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.551837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.143 [2024-04-24 20:57:44.551857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.143 [2024-04-24 20:57:44.551866] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.143 [2024-04-24 20:57:44.551872] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.143 [2024-04-24 20:57:44.551889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.143 qpair failed and we were unable to recover it. 
00:26:20.143 [2024-04-24 20:57:44.561775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.561841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.143 [2024-04-24 20:57:44.561861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.143 [2024-04-24 20:57:44.561869] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.143 [2024-04-24 20:57:44.561876] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.143 [2024-04-24 20:57:44.561893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.143 qpair failed and we were unable to recover it. 00:26:20.143 [2024-04-24 20:57:44.571670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.571778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.143 [2024-04-24 20:57:44.571800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.143 [2024-04-24 20:57:44.571809] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.143 [2024-04-24 20:57:44.571815] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.143 [2024-04-24 20:57:44.571833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.143 qpair failed and we were unable to recover it. 00:26:20.143 [2024-04-24 20:57:44.581855] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.581923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.143 [2024-04-24 20:57:44.581943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.143 [2024-04-24 20:57:44.581952] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.143 [2024-04-24 20:57:44.581959] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.143 [2024-04-24 20:57:44.581975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.143 qpair failed and we were unable to recover it. 
00:26:20.143 [2024-04-24 20:57:44.591894] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.143 [2024-04-24 20:57:44.592002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.592023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.592032] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.592039] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.592056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.601904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.601969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.601990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.601998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.602005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.602023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.611904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.611972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.611993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.612000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.612007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.612025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 
00:26:20.144 [2024-04-24 20:57:44.621972] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.622040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.622061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.622069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.622076] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.622093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.632021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.632102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.632123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.632136] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.632144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.632161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.642016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.642079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.642099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.642107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.642114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.642131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 
00:26:20.144 [2024-04-24 20:57:44.652058] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.652128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.652150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.652159] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.652167] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.652185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.662093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.662172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.662194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.662202] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.662211] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.662228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.672137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.672220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.672241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.672249] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.672256] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.672275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 
00:26:20.144 [2024-04-24 20:57:44.682148] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.682220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.682242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.682250] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.682259] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.682277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.692191] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.144 [2024-04-24 20:57:44.692257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.144 [2024-04-24 20:57:44.692277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.144 [2024-04-24 20:57:44.692286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.144 [2024-04-24 20:57:44.692293] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.144 [2024-04-24 20:57:44.692311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.144 qpair failed and we were unable to recover it. 00:26:20.144 [2024-04-24 20:57:44.702229] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.702299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.702319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.702327] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.702335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.702352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 
00:26:20.145 [2024-04-24 20:57:44.712155] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.712251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.712275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.712284] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.712291] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.712309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 00:26:20.145 [2024-04-24 20:57:44.722297] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.722376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.722398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.722413] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.722420] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.722438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 00:26:20.145 [2024-04-24 20:57:44.732333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.732405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.732427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.732435] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.732443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.732461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 
00:26:20.145 [2024-04-24 20:57:44.742360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.742428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.742448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.742457] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.742464] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.742480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 00:26:20.145 [2024-04-24 20:57:44.752292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.752375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.752396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.752404] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.752412] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.752429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 00:26:20.145 [2024-04-24 20:57:44.762414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.762479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.762505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.762514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.762521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.762541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 
00:26:20.145 [2024-04-24 20:57:44.772428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.145 [2024-04-24 20:57:44.772508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.145 [2024-04-24 20:57:44.772530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.145 [2024-04-24 20:57:44.772539] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.145 [2024-04-24 20:57:44.772546] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.145 [2024-04-24 20:57:44.772563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.145 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.782448] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.782518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.782540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.782548] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.782555] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.782572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.792540] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.792616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.792637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.792645] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.792652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.792671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 
00:26:20.410 [2024-04-24 20:57:44.802536] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.802595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.802615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.802624] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.802630] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.802647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.812562] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.812633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.812659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.812667] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.812674] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.812692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.822625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.822688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.822709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.822717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.822731] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.822749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 
00:26:20.410 [2024-04-24 20:57:44.832681] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.832771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.832792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.832801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.832809] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.832826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.842692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.842771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.842792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.842799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.842808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.842825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.852579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.852650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.852671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.852679] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.852686] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.852709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 
00:26:20.410 [2024-04-24 20:57:44.862749] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.862820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.862841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.862848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.862856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.862875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.872662] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.872790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.872812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.872820] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.872827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.872844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.882783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.882895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.882916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.882924] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.882931] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.882949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 
00:26:20.410 [2024-04-24 20:57:44.892713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.892784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.892805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.410 [2024-04-24 20:57:44.892813] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.410 [2024-04-24 20:57:44.892820] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.410 [2024-04-24 20:57:44.892838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.410 qpair failed and we were unable to recover it. 00:26:20.410 [2024-04-24 20:57:44.902848] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.410 [2024-04-24 20:57:44.902917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.410 [2024-04-24 20:57:44.902942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.902950] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.902957] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.902974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:44.912904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.912991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.913011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.913019] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.913029] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.913047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 
00:26:20.411 [2024-04-24 20:57:44.922901] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.922978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.922998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.923006] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.923014] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.923032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:44.932929] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.932995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.933016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.933024] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.933031] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.933050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:44.942988] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.943064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.943084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.943092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.943106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.943123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 
00:26:20.411 [2024-04-24 20:57:44.952999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.953074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.953094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.953103] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.953111] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.953129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:44.962990] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.963101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.963156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.963164] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.963171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.963198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:44.972954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.973023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.973044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.973052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.973059] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.973077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 
00:26:20.411 [2024-04-24 20:57:44.983159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.983258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.983280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.983292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.983300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.983319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:44.993151] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:44.993241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:44.993262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:44.993270] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:44.993278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:44.993297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:45.003128] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:45.003190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:45.003212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:45.003220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:45.003227] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:45.003244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 
00:26:20.411 [2024-04-24 20:57:45.013204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:45.013316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:45.013337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:45.013346] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:45.013353] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:45.013370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:45.023290] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:45.023353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:45.023374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:45.023382] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:45.023389] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.411 [2024-04-24 20:57:45.023406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.411 qpair failed and we were unable to recover it. 00:26:20.411 [2024-04-24 20:57:45.033142] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.411 [2024-04-24 20:57:45.033217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.411 [2024-04-24 20:57:45.033237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.411 [2024-04-24 20:57:45.033246] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.411 [2024-04-24 20:57:45.033258] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.412 [2024-04-24 20:57:45.033277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.412 qpair failed and we were unable to recover it. 
00:26:20.412 [2024-04-24 20:57:45.043274] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.412 [2024-04-24 20:57:45.043345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.412 [2024-04-24 20:57:45.043365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.412 [2024-04-24 20:57:45.043374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.412 [2024-04-24 20:57:45.043381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.412 [2024-04-24 20:57:45.043399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.412 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.053304] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.053374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.053395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.053403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.053410] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.053429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.063397] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.063509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.063533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.063545] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.063552] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.063570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-04-24 20:57:45.073419] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.073502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.073525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.073533] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.073542] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.073559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.083515] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.083588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.083609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.083617] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.083625] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.083642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.093428] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.093490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.093512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.093520] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.093528] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.093544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-04-24 20:57:45.103453] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.103521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.103542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.103551] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.103558] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.103576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.113526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.113614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.113635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.113643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.113652] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.113669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.123492] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.123561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.123581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.123595] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.123603] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.123620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-04-24 20:57:45.133550] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.133670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.133692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.133700] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.133706] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.133724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.143592] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.143656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.143676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.143684] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.143691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.143707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.153657] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.153745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.153766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.153774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.153782] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.153802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-04-24 20:57:45.163680] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.163756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.163776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.163784] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.163793] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.163809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.173666] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.173750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.173770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.173780] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.173787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.173804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.183700] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.183770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.183792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.183800] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.183807] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.183826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 
00:26:20.675 [2024-04-24 20:57:45.193747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.193833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.193853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.193860] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.675 [2024-04-24 20:57:45.193868] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.675 [2024-04-24 20:57:45.193885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.675 qpair failed and we were unable to recover it. 00:26:20.675 [2024-04-24 20:57:45.203654] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.675 [2024-04-24 20:57:45.203716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.675 [2024-04-24 20:57:45.203744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.675 [2024-04-24 20:57:45.203752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.203759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.203778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-04-24 20:57:45.213796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.213873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.213903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.213912] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.213919] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.213936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 
00:26:20.676 [2024-04-24 20:57:45.223864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.223930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.223951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.223959] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.223966] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.223983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-04-24 20:57:45.233892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.233970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.233990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.233998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.234005] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.234022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-04-24 20:57:45.243921] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.243984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.244004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.244012] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.244019] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.244036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 
00:26:20.676 [2024-04-24 20:57:45.254000] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.254071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.254092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.254100] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.254106] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.254128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-04-24 20:57:45.263999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.264067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.264087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.264095] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.264102] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.264118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-04-24 20:57:45.273908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.273983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.274002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.274010] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.274017] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.274032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 
00:26:20.676 [2024-04-24 20:57:45.284038] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.284099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.284118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.284126] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.284132] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.284148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-04-24 20:57:45.294050] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.294114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.294132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.294140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.294146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.294162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 00:26:20.676 [2024-04-24 20:57:45.304113] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.676 [2024-04-24 20:57:45.304180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.676 [2024-04-24 20:57:45.304203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.676 [2024-04-24 20:57:45.304211] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.676 [2024-04-24 20:57:45.304218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.676 [2024-04-24 20:57:45.304235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.676 qpair failed and we were unable to recover it. 
00:26:20.939 [2024-04-24 20:57:45.314124] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.939 [2024-04-24 20:57:45.314201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.314220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.314228] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.314234] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.314250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.324106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.324165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.324183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.324191] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.324197] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.324213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.334165] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.334246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.334264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.334272] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.334280] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.334295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 
00:26:20.940 [2024-04-24 20:57:45.344204] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.344270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.344287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.344295] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.344301] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.344320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.354217] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.354279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.354296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.354303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.354310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.354325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.364253] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.364311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.364327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.364335] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.364341] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.364356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 
00:26:20.940 [2024-04-24 20:57:45.374283] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.374343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.374360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.374367] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.374373] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.374388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.384330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.384401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.384417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.384424] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.384430] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.384446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.394314] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.394378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.394394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.394401] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.394407] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.394422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 
00:26:20.940 [2024-04-24 20:57:45.404251] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.404309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.404325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.404332] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.404339] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.404354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.414360] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.414419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.414434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.414442] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.414448] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.414462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.424452] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.424532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.424548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.424555] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.424561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.424576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 
00:26:20.940 [2024-04-24 20:57:45.434353] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.434443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.434470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.940 [2024-04-24 20:57:45.434479] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.940 [2024-04-24 20:57:45.434491] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.940 [2024-04-24 20:57:45.434510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.940 qpair failed and we were unable to recover it. 00:26:20.940 [2024-04-24 20:57:45.444488] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.940 [2024-04-24 20:57:45.444555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.940 [2024-04-24 20:57:45.444572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.444579] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.444586] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.444602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.454514] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.454576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.454601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.454610] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.454619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.454638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 
00:26:20.941 [2024-04-24 20:57:45.464531] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.464591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.464607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.464615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.464621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.464636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.474533] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.474591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.474606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.474613] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.474619] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.474633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.484575] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.484630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.484645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.484652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.484659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.484673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 
00:26:20.941 [2024-04-24 20:57:45.494658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.494719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.494749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.494756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.494763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.494778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.504521] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.504580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.504595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.504602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.504609] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.504623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.514685] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.514769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.514784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.514791] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.514798] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.514812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 
00:26:20.941 [2024-04-24 20:57:45.524688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.524773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.524787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.524799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.524805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.524820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.534845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.534941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.534955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.534962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.534969] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.534983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.544813] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.544868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.544882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.544889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.544895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.544909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 
00:26:20.941 [2024-04-24 20:57:45.554827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.554886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.554900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.554907] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.554913] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.554928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.564870] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.564934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.564948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.564955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.564962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.564976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 00:26:20.941 [2024-04-24 20:57:45.574827] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:20.941 [2024-04-24 20:57:45.574880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:20.941 [2024-04-24 20:57:45.574894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:20.941 [2024-04-24 20:57:45.574901] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:20.941 [2024-04-24 20:57:45.574907] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:20.941 [2024-04-24 20:57:45.574921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:20.941 qpair failed and we were unable to recover it. 
00:26:21.204 [2024-04-24 20:57:45.584871] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.204 [2024-04-24 20:57:45.584926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.204 [2024-04-24 20:57:45.584940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.204 [2024-04-24 20:57:45.584947] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.204 [2024-04-24 20:57:45.584953] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.204 [2024-04-24 20:57:45.584967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.204 qpair failed and we were unable to recover it. 00:26:21.204 [2024-04-24 20:57:45.594875] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.594933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.594947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.594954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.594960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.594975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.604920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.604971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.604985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.604992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.604999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.605012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 
00:26:21.205 [2024-04-24 20:57:45.614970] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.615062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.615080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.615087] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.615093] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.615107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.625016] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.625090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.625104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.625111] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.625118] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.625132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.635019] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.635081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.635095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.635102] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.635108] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.635122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 
00:26:21.205 [2024-04-24 20:57:45.645061] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.645116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.645130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.645137] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.645143] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.645156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.655047] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.655099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.655113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.655120] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.655126] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.655143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.665097] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.665155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.665169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.665176] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.665182] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.665197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 
00:26:21.205 [2024-04-24 20:57:45.675018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.675084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.675100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.675107] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.675114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.675128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.685193] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.685282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.685298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.685305] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.685311] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.685325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.695187] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.695240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.695255] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.695262] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.695269] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.695283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 
00:26:21.205 [2024-04-24 20:57:45.705219] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.705275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.705294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.705301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.705307] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.705321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.715309] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.715414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.715429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.715436] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.205 [2024-04-24 20:57:45.715443] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.205 [2024-04-24 20:57:45.715457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.205 qpair failed and we were unable to recover it. 00:26:21.205 [2024-04-24 20:57:45.725245] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.205 [2024-04-24 20:57:45.725300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.205 [2024-04-24 20:57:45.725314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.205 [2024-04-24 20:57:45.725321] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.725327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.725341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 
00:26:21.206 [2024-04-24 20:57:45.735302] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.735357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.735372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.735380] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.735386] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.735400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.206 [2024-04-24 20:57:45.745340] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.745396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.745411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.745418] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.745424] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.745441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.206 [2024-04-24 20:57:45.755382] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.755481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.755497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.755503] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.755510] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.755524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 
00:26:21.206 [2024-04-24 20:57:45.765385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.765444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.765470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.765478] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.765485] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.765505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.206 [2024-04-24 20:57:45.775480] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.775541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.775566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.775575] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.775582] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.775601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.206 [2024-04-24 20:57:45.785437] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.785506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.785523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.785530] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.785536] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.785552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 
00:26:21.206 [2024-04-24 20:57:45.795486] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.795550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.795565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.795573] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.795579] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.795593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.206 [2024-04-24 20:57:45.805494] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.805556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.805571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.805578] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.805584] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.805598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.206 [2024-04-24 20:57:45.815407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.815460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.815475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.815482] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.815488] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.815502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 
00:26:21.206 [2024-04-24 20:57:45.825570] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.825625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.825639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.825647] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.825653] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.825668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.206 [2024-04-24 20:57:45.835608] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.206 [2024-04-24 20:57:45.835669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.206 [2024-04-24 20:57:45.835684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.206 [2024-04-24 20:57:45.835691] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.206 [2024-04-24 20:57:45.835703] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.206 [2024-04-24 20:57:45.835719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.206 qpair failed and we were unable to recover it. 00:26:21.469 [2024-04-24 20:57:45.845579] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.845638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.845652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.845660] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.845666] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.845680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 
00:26:21.469 [2024-04-24 20:57:45.855653] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.855711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.855731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.855739] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.855745] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.855760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 00:26:21.469 [2024-04-24 20:57:45.865566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.865619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.865634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.865641] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.865647] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.865662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 00:26:21.469 [2024-04-24 20:57:45.875754] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.875852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.875866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.875873] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.875880] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.875894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 
00:26:21.469 [2024-04-24 20:57:45.885604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.885663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.885677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.885685] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.885691] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.885705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 00:26:21.469 [2024-04-24 20:57:45.895743] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.895800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.895814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.895821] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.895827] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.895841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 00:26:21.469 [2024-04-24 20:57:45.905650] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.905708] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.905722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.905734] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.905740] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.905754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 
00:26:21.469 [2024-04-24 20:57:45.915841] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.915906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.469 [2024-04-24 20:57:45.915921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.469 [2024-04-24 20:57:45.915928] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.469 [2024-04-24 20:57:45.915934] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.469 [2024-04-24 20:57:45.915948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.469 qpair failed and we were unable to recover it. 00:26:21.469 [2024-04-24 20:57:45.925814] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.469 [2024-04-24 20:57:45.925868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.925882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.925894] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.925901] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.925915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:45.935840] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:45.935897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.935912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.935919] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.935925] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.935939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 
00:26:21.470 [2024-04-24 20:57:45.945872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:45.945933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.945947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.945955] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.945961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.945975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:45.955884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:45.955970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.955985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.955993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.955999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.956014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:45.965913] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:45.965969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.965984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.965991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.965998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.966011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 
00:26:21.470 [2024-04-24 20:57:45.975862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:45.975914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.975929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.975936] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.975942] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.975956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:45.985863] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:45.985920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.985934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.985942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.985948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.985962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:45.995992] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:45.996058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:45.996073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:45.996080] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:45.996087] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:45.996101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 
00:26:21.470 [2024-04-24 20:57:46.006041] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:46.006096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:46.006110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:46.006118] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:46.006124] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:46.006137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:46.016068] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:46.016125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:46.016139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:46.016153] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:46.016160] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:46.016174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:46.025979] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:46.026037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:46.026053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:46.026060] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:46.026067] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:46.026082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 
00:26:21.470 [2024-04-24 20:57:46.036121] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:46.036181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:46.036196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:46.036203] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:46.036210] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:46.036224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:46.046159] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:46.046213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:46.046228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:46.046235] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:46.046241] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.470 [2024-04-24 20:57:46.046255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.470 qpair failed and we were unable to recover it. 00:26:21.470 [2024-04-24 20:57:46.056179] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.470 [2024-04-24 20:57:46.056234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.470 [2024-04-24 20:57:46.056248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.470 [2024-04-24 20:57:46.056256] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.470 [2024-04-24 20:57:46.056262] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.471 [2024-04-24 20:57:46.056275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.471 qpair failed and we were unable to recover it. 
00:26:21.471 [2024-04-24 20:57:46.066211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.471 [2024-04-24 20:57:46.066269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.471 [2024-04-24 20:57:46.066284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.471 [2024-04-24 20:57:46.066291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.471 [2024-04-24 20:57:46.066297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.471 [2024-04-24 20:57:46.066311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.471 qpair failed and we were unable to recover it. 00:26:21.471 [2024-04-24 20:57:46.076225] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.471 [2024-04-24 20:57:46.076281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.471 [2024-04-24 20:57:46.076296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.471 [2024-04-24 20:57:46.076303] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.471 [2024-04-24 20:57:46.076309] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.471 [2024-04-24 20:57:46.076323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.471 qpair failed and we were unable to recover it. 00:26:21.471 [2024-04-24 20:57:46.086318] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.471 [2024-04-24 20:57:46.086374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.471 [2024-04-24 20:57:46.086389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.471 [2024-04-24 20:57:46.086396] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.471 [2024-04-24 20:57:46.086402] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.471 [2024-04-24 20:57:46.086416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.471 qpair failed and we were unable to recover it. 
00:26:21.471 [2024-04-24 20:57:46.096293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.471 [2024-04-24 20:57:46.096349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.471 [2024-04-24 20:57:46.096363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.471 [2024-04-24 20:57:46.096371] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.471 [2024-04-24 20:57:46.096377] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.471 [2024-04-24 20:57:46.096391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.471 qpair failed and we were unable to recover it. 00:26:21.471 [2024-04-24 20:57:46.106406] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.471 [2024-04-24 20:57:46.106463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.471 [2024-04-24 20:57:46.106481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.471 [2024-04-24 20:57:46.106488] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.471 [2024-04-24 20:57:46.106495] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.471 [2024-04-24 20:57:46.106509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.471 qpair failed and we were unable to recover it. 00:26:21.734 [2024-04-24 20:57:46.116333] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.116394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.116409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.116416] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.116423] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.116438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 
00:26:21.734 [2024-04-24 20:57:46.126361] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.126415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.126430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.126437] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.126444] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.126457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 00:26:21.734 [2024-04-24 20:57:46.136395] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.136450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.136465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.136472] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.136478] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.136492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 00:26:21.734 [2024-04-24 20:57:46.146432] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.146490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.146504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.146511] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.146518] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.146535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 
00:26:21.734 [2024-04-24 20:57:46.156324] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.156381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.156396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.156403] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.156409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.156423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 00:26:21.734 [2024-04-24 20:57:46.166471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.166540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.166554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.166561] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.166568] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.166582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 00:26:21.734 [2024-04-24 20:57:46.176498] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.176568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.176582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.176589] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.176595] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.176610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 
00:26:21.734 [2024-04-24 20:57:46.186535] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.186593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.186607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.186614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.186620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.186635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 00:26:21.734 [2024-04-24 20:57:46.196557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.196637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.196655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.196662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.734 [2024-04-24 20:57:46.196669] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.734 [2024-04-24 20:57:46.196682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.734 qpair failed and we were unable to recover it. 00:26:21.734 [2024-04-24 20:57:46.206588] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.734 [2024-04-24 20:57:46.206645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.734 [2024-04-24 20:57:46.206660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.734 [2024-04-24 20:57:46.206669] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.206675] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.206689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 
00:26:21.735 [2024-04-24 20:57:46.216616] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.216671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.216686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.216693] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.216699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.216712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.226649] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.226706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.226721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.226733] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.226739] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.226753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.236688] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.236755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.236770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.236777] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.236787] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.236801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 
00:26:21.735 [2024-04-24 20:57:46.246684] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.246742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.246757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.246764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.246771] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.246785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.256636] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.256735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.256749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.256756] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.256764] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.256778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.266645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.266700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.266714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.266721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.266732] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.266747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 
00:26:21.735 [2024-04-24 20:57:46.276791] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.276854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.276869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.276876] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.276882] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.276896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.286803] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.286867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.286882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.286889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.286895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.286909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.296824] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.296926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.296941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.296948] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.296954] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.296969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 
00:26:21.735 [2024-04-24 20:57:46.306868] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.306920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.306934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.306942] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.306948] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.306962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.316895] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.316959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.316974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.316981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.316987] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.317001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.735 [2024-04-24 20:57:46.326914] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.326968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.326983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.326993] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.326999] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.327013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 
00:26:21.735 [2024-04-24 20:57:46.336923] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.735 [2024-04-24 20:57:46.336975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.735 [2024-04-24 20:57:46.336991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.735 [2024-04-24 20:57:46.336998] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.735 [2024-04-24 20:57:46.337004] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.735 [2024-04-24 20:57:46.337018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.735 qpair failed and we were unable to recover it. 00:26:21.736 [2024-04-24 20:57:46.346978] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.736 [2024-04-24 20:57:46.347037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.736 [2024-04-24 20:57:46.347051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.736 [2024-04-24 20:57:46.347058] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.736 [2024-04-24 20:57:46.347064] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.736 [2024-04-24 20:57:46.347077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.736 qpair failed and we were unable to recover it. 00:26:21.736 [2024-04-24 20:57:46.357011] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.736 [2024-04-24 20:57:46.357069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.736 [2024-04-24 20:57:46.357083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.736 [2024-04-24 20:57:46.357090] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.736 [2024-04-24 20:57:46.357096] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.736 [2024-04-24 20:57:46.357110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.736 qpair failed and we were unable to recover it. 
00:26:21.736 [2024-04-24 20:57:46.367064] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.736 [2024-04-24 20:57:46.367141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.736 [2024-04-24 20:57:46.367155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.736 [2024-04-24 20:57:46.367163] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.736 [2024-04-24 20:57:46.367170] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.736 [2024-04-24 20:57:46.367183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.736 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.377043] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.377095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.377109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.377116] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.377122] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.377136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.387093] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.387152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.387166] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.387173] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.387179] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.387193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 
00:26:21.998 [2024-04-24 20:57:46.397111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.397172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.397187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.397194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.397200] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.397214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.407141] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.407210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.407224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.407231] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.407237] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.407252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.417169] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.417220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.417235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.417245] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.417252] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.417266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 
00:26:21.998 [2024-04-24 20:57:46.427199] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.427255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.427270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.427277] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.427283] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.427297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.437239] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.437307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.437321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.437329] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.437335] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.437349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.447236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.447291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.447306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.447313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.447319] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.447332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 
00:26:21.998 [2024-04-24 20:57:46.457252] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.457318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.457332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.457339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.457345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.457359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.467260] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.467316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.467330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.467337] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.467343] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.467357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 00:26:21.998 [2024-04-24 20:57:46.477316] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.998 [2024-04-24 20:57:46.477381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.998 [2024-04-24 20:57:46.477395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.998 [2024-04-24 20:57:46.477402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.998 [2024-04-24 20:57:46.477409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.998 [2024-04-24 20:57:46.477422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.998 qpair failed and we were unable to recover it. 
00:26:21.998 [2024-04-24 20:57:46.487348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.487400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.487414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.487421] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.487427] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.487441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.497369] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.497430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.497455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.497464] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.497471] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.497490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.507407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.507472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.507500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.507509] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.507516] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.507535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 
00:26:21.999 [2024-04-24 20:57:46.517429] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.517541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.517567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.517576] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.517583] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.517602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.527440] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.527501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.527517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.527524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.527531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.527545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.537534] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.537591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.537607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.537614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.537620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.537637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 
00:26:21.999 [2024-04-24 20:57:46.547398] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.547453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.547468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.547475] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.547481] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.547500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.557516] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.557580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.557595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.557602] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.557608] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.557622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.567584] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.567636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.567650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.567657] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.567664] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.567677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 
00:26:21.999 [2024-04-24 20:57:46.577610] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.577660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.577674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.577681] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.577688] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.577702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.587692] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.587753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.587768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.587775] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.587781] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.587795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.597671] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.597739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.597757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.597764] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.597770] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.597784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 
00:26:21.999 [2024-04-24 20:57:46.607572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.607630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.607645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:21.999 [2024-04-24 20:57:46.607652] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:21.999 [2024-04-24 20:57:46.607659] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:21.999 [2024-04-24 20:57:46.607673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:21.999 qpair failed and we were unable to recover it. 00:26:21.999 [2024-04-24 20:57:46.617703] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:21.999 [2024-04-24 20:57:46.617763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:21.999 [2024-04-24 20:57:46.617778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.000 [2024-04-24 20:57:46.617785] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.000 [2024-04-24 20:57:46.617791] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.000 [2024-04-24 20:57:46.617805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.000 qpair failed and we were unable to recover it. 00:26:22.000 [2024-04-24 20:57:46.627750] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.000 [2024-04-24 20:57:46.627808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.000 [2024-04-24 20:57:46.627822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.000 [2024-04-24 20:57:46.627829] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.000 [2024-04-24 20:57:46.627836] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.000 [2024-04-24 20:57:46.627850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.000 qpair failed and we were unable to recover it. 
00:26:22.262 [2024-04-24 20:57:46.637747] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.637826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.637840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.637848] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.637858] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.637872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.647775] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.647845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.647860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.647867] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.647873] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.647887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.657826] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.657883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.657897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.657904] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.657910] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.657924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 
00:26:22.262 [2024-04-24 20:57:46.667878] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.667933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.667947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.667954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.667961] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.667975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.677874] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.677937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.677951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.677958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.677965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.677980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.687872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.687939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.687954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.687961] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.687967] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.687981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 
00:26:22.262 [2024-04-24 20:57:46.697963] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.698047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.698062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.698069] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.698075] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.698089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.707969] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.708023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.708038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.708045] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.708051] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.708064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.717998] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.718071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.718085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.718092] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.718098] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.718112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 
00:26:22.262 [2024-04-24 20:57:46.727904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.727960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.727974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.727981] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.727990] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.728004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.738101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.738153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.738167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.738174] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.738180] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.738194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 00:26:22.262 [2024-04-24 20:57:46.748088] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.748144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.748158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.748165] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.262 [2024-04-24 20:57:46.748171] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.262 [2024-04-24 20:57:46.748185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.262 qpair failed and we were unable to recover it. 
00:26:22.262 [2024-04-24 20:57:46.758089] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.262 [2024-04-24 20:57:46.758150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.262 [2024-04-24 20:57:46.758164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.262 [2024-04-24 20:57:46.758171] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.758178] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.758191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.768145] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.768198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.768212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.768219] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.768225] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.768239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.778160] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.778217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.778231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.778238] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.778244] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.778258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 
00:26:22.263 [2024-04-24 20:57:46.788152] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.788214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.788228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.788236] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.788242] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.788256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.798227] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.798285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.798299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.798306] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.798313] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.798327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.808213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.808268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.808282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.808289] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.808296] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.808310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 
00:26:22.263 [2024-04-24 20:57:46.818264] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.818316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.818331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.818341] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.818348] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.818361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.828300] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.828355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.828370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.828377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.828383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.828397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.838325] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.838387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.838402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.838409] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.838415] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.838429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 
00:26:22.263 [2024-04-24 20:57:46.848357] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.848412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.848427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.848434] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.848441] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.848456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.858344] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.858397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.858412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.858419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.858426] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.858439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.868393] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.868483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.868497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.868504] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.868511] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.868525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 
00:26:22.263 [2024-04-24 20:57:46.878404] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.878495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.878509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.878517] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.878523] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.878537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.888463] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.263 [2024-04-24 20:57:46.888522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.263 [2024-04-24 20:57:46.888536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.263 [2024-04-24 20:57:46.888543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.263 [2024-04-24 20:57:46.888549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.263 [2024-04-24 20:57:46.888563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.263 qpair failed and we were unable to recover it. 00:26:22.263 [2024-04-24 20:57:46.898503] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.264 [2024-04-24 20:57:46.898587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.264 [2024-04-24 20:57:46.898601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.264 [2024-04-24 20:57:46.898608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.264 [2024-04-24 20:57:46.898615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.264 [2024-04-24 20:57:46.898629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.264 qpair failed and we were unable to recover it. 
00:26:22.525 [2024-04-24 20:57:46.908414] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.525 [2024-04-24 20:57:46.908471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.525 [2024-04-24 20:57:46.908488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.525 [2024-04-24 20:57:46.908495] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.525 [2024-04-24 20:57:46.908501] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.525 [2024-04-24 20:57:46.908515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.525 qpair failed and we were unable to recover it. 00:26:22.525 [2024-04-24 20:57:46.918571] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.525 [2024-04-24 20:57:46.918641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.525 [2024-04-24 20:57:46.918655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.525 [2024-04-24 20:57:46.918662] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.525 [2024-04-24 20:57:46.918668] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.525 [2024-04-24 20:57:46.918682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.525 qpair failed and we were unable to recover it. 00:26:22.525 [2024-04-24 20:57:46.928590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.525 [2024-04-24 20:57:46.928644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.928659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.928666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.928672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.928686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:46.938595] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:46.938645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.938659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.938666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.938673] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.938686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:46.948642] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:46.948700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.948714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.948721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.948733] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.948751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:46.958658] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:46.958727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.958741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.958748] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.958754] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.958768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:46.968578] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:46.968635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.968649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.968656] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.968662] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.968676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:46.978690] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:46.978747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.978761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.978768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.978774] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.978788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:46.988718] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:46.988777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.988791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.988799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.988805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.988819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:46.998764] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:46.998825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:46.998843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:46.998850] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:46.998856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:46.998870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.008839] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.008917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.008931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.008938] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.008945] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.008959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.018818] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.018881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.018896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.018903] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.018909] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.018922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:47.028884] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.028941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.028955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.028962] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.028968] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.028982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.038867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.038925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.038939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.038946] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.038955] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.038970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.048897] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.049002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.049016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.049023] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.049030] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.049044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:47.058920] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.058979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.058993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.059000] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.059007] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.059021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.068974] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.069033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.069047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.069054] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.069060] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.069073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.079015] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.079077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.079091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.079098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.079105] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.079118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:47.089021] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.089077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.089091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.089098] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.089104] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.089118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.099034] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.099084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.099099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.099106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.099112] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.099126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.109101] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.109159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.109174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.109181] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.109187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.109201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:47.119111] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.119173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.119187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.119194] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.119201] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.119214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.129008] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.129061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.129075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.129082] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.129092] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.129106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.139125] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.139199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.139213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.139220] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.139226] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.139240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 
00:26:22.526 [2024-04-24 20:57:47.149208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.149264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.149279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.149286] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.149292] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.149305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.526 [2024-04-24 20:57:47.159213] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.526 [2024-04-24 20:57:47.159273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.526 [2024-04-24 20:57:47.159287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.526 [2024-04-24 20:57:47.159294] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.526 [2024-04-24 20:57:47.159300] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.526 [2024-04-24 20:57:47.159314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.526 qpair failed and we were unable to recover it. 00:26:22.788 [2024-04-24 20:57:47.169235] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.169292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.169306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.169313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.169321] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.169335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 
00:26:22.788 [2024-04-24 20:57:47.179262] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.179317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.179332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.179339] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.179345] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.179359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 00:26:22.788 [2024-04-24 20:57:47.189303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.189359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.189374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.189381] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.189387] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.189401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 00:26:22.788 [2024-04-24 20:57:47.199320] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.199377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.199391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.199399] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.199405] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.199419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 
00:26:22.788 [2024-04-24 20:57:47.209348] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.209405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.209419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.209426] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.209432] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.209446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 00:26:22.788 [2024-04-24 20:57:47.219374] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.219429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.219445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.219456] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.219463] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.219478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 00:26:22.788 [2024-04-24 20:57:47.229407] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.229470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.229485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.229492] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.229498] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.229513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 
00:26:22.788 [2024-04-24 20:57:47.239445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.788 [2024-04-24 20:57:47.239502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.788 [2024-04-24 20:57:47.239517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.788 [2024-04-24 20:57:47.239524] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.788 [2024-04-24 20:57:47.239530] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.788 [2024-04-24 20:57:47.239544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.788 qpair failed and we were unable to recover it. 00:26:22.788 [2024-04-24 20:57:47.249446] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.249504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.249518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.249525] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.249531] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.249545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.259471] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.259522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.259536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.259543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.259550] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.259564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 
00:26:22.789 [2024-04-24 20:57:47.269507] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.269604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.269619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.269627] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.269633] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.269647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.279532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.279593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.279607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.279614] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.279620] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.279634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.289566] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.289658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.289673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.289680] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.289687] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.289701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 
00:26:22.789 [2024-04-24 20:57:47.299590] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.299645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.299659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.299666] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.299672] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.299687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.309625] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.309681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.309699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.309706] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.309714] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.309734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.319645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.319703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.319717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.319729] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.319735] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.319750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 
00:26:22.789 [2024-04-24 20:57:47.329670] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.329730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.329745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.329752] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.329759] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.329773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.339663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.339750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.339765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.339772] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.339779] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.339793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.349748] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.349811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.349825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.349832] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.349839] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.349856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 
00:26:22.789 [2024-04-24 20:57:47.359719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.359779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.359794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.359801] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.359808] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.359821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.369783] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.369837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.369851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.369858] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.789 [2024-04-24 20:57:47.369864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.789 [2024-04-24 20:57:47.369879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.789 qpair failed and we were unable to recover it. 00:26:22.789 [2024-04-24 20:57:47.379811] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.789 [2024-04-24 20:57:47.379867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.789 [2024-04-24 20:57:47.379882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.789 [2024-04-24 20:57:47.379889] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.790 [2024-04-24 20:57:47.379895] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.790 [2024-04-24 20:57:47.379909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.790 qpair failed and we were unable to recover it. 
00:26:22.790 [2024-04-24 20:57:47.389851] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.790 [2024-04-24 20:57:47.389970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.790 [2024-04-24 20:57:47.389985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.790 [2024-04-24 20:57:47.389992] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.790 [2024-04-24 20:57:47.389998] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.790 [2024-04-24 20:57:47.390012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.790 qpair failed and we were unable to recover it. 00:26:22.790 [2024-04-24 20:57:47.399904] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.790 [2024-04-24 20:57:47.399974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.790 [2024-04-24 20:57:47.399992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.790 [2024-04-24 20:57:47.399999] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.790 [2024-04-24 20:57:47.400006] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.790 [2024-04-24 20:57:47.400021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.790 qpair failed and we were unable to recover it. 00:26:22.790 [2024-04-24 20:57:47.409872] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.790 [2024-04-24 20:57:47.409965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.790 [2024-04-24 20:57:47.409981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.790 [2024-04-24 20:57:47.409989] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.790 [2024-04-24 20:57:47.410000] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.790 [2024-04-24 20:57:47.410014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.790 qpair failed and we were unable to recover it. 
00:26:22.790 [2024-04-24 20:57:47.419804] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:22.790 [2024-04-24 20:57:47.419862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:22.790 [2024-04-24 20:57:47.419876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:22.790 [2024-04-24 20:57:47.419883] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:22.790 [2024-04-24 20:57:47.419890] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:22.790 [2024-04-24 20:57:47.419904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:22.790 qpair failed and we were unable to recover it. 00:26:23.052 [2024-04-24 20:57:47.429845] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.052 [2024-04-24 20:57:47.429904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.052 [2024-04-24 20:57:47.429919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.052 [2024-04-24 20:57:47.429926] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.052 [2024-04-24 20:57:47.429932] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.052 [2024-04-24 20:57:47.429946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-04-24 20:57:47.439965] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.052 [2024-04-24 20:57:47.440027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.052 [2024-04-24 20:57:47.440041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.052 [2024-04-24 20:57:47.440048] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.052 [2024-04-24 20:57:47.440055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.052 [2024-04-24 20:57:47.440072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.052 qpair failed and we were unable to recover it. 
00:26:23.052 [2024-04-24 20:57:47.450014] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.052 [2024-04-24 20:57:47.450072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.052 [2024-04-24 20:57:47.450086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.052 [2024-04-24 20:57:47.450093] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.052 [2024-04-24 20:57:47.450099] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.052 [2024-04-24 20:57:47.450113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-04-24 20:57:47.460067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.052 [2024-04-24 20:57:47.460118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.052 [2024-04-24 20:57:47.460132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.052 [2024-04-24 20:57:47.460140] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.052 [2024-04-24 20:57:47.460146] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.052 [2024-04-24 20:57:47.460160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.052 qpair failed and we were unable to recover it. 00:26:23.052 [2024-04-24 20:57:47.470067] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.052 [2024-04-24 20:57:47.470122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.052 [2024-04-24 20:57:47.470136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.052 [2024-04-24 20:57:47.470144] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.470150] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.470164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-04-24 20:57:47.480104] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.480162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.480176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.480183] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.480189] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.480204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.490129] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.490217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.490231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.490240] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.490246] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.490261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.500156] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.500208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.500222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.500230] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.500236] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.500250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-04-24 20:57:47.510172] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.510242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.510257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.510264] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.510270] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.510285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.520211] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.520269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.520284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.520291] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.520297] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.520311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.530236] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.530292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.530307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.530314] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.530327] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.530341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-04-24 20:57:47.540247] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.540297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.540311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.540319] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.540325] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.540339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.550296] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.550355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.550369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.550377] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.550383] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.550397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.560310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.560370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.560384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.560391] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.560397] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.560411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-04-24 20:57:47.570343] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.570398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.570412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.570419] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.570425] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.570439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.580373] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.580430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.580444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.580451] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.580457] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.580472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.590436] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.590494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.590508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.590515] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.590522] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.590535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 
00:26:23.053 [2024-04-24 20:57:47.600445] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.053 [2024-04-24 20:57:47.600509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.053 [2024-04-24 20:57:47.600534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.053 [2024-04-24 20:57:47.600543] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.053 [2024-04-24 20:57:47.600549] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.053 [2024-04-24 20:57:47.600569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.053 qpair failed and we were unable to recover it. 00:26:23.053 [2024-04-24 20:57:47.610501] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.610565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.610589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.610598] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.610605] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.610624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-04-24 20:57:47.620525] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.620579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.620596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.620608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.620615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.620630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-04-24 20:57:47.630424] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.630478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.630494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.630502] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.630509] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.630524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-04-24 20:57:47.640557] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.640617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.640632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.640639] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.640646] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.640660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-04-24 20:57:47.650591] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.650691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.650705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.650712] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.650719] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.650740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-04-24 20:57:47.660568] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.660622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.660636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.660643] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.660650] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.660664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-04-24 20:57:47.670645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.670700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.670714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.670721] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.670734] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.670749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.054 [2024-04-24 20:57:47.680689] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.680750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.680764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.680771] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.680777] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.680792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 
00:26:23.054 [2024-04-24 20:57:47.690694] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.054 [2024-04-24 20:57:47.690752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.054 [2024-04-24 20:57:47.690767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.054 [2024-04-24 20:57:47.690774] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.054 [2024-04-24 20:57:47.690780] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.054 [2024-04-24 20:57:47.690795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.054 qpair failed and we were unable to recover it. 00:26:23.318 [2024-04-24 20:57:47.700745] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.700834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.700849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.700857] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.700864] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.700878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 00:26:23.318 [2024-04-24 20:57:47.710763] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.710823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.710838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.710849] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.710856] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.710870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 
00:26:23.318 [2024-04-24 20:57:47.720843] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.720933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.720948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.720956] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.720962] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.720977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 00:26:23.318 [2024-04-24 20:57:47.730796] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.730850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.730865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.730872] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.730879] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.730893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 00:26:23.318 [2024-04-24 20:57:47.740838] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.740896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.740912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.740920] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.740929] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.740944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 
00:26:23.318 [2024-04-24 20:57:47.750877] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.750931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.750946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.750954] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.750960] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.750974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 00:26:23.318 [2024-04-24 20:57:47.760785] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.760849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.760864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.760871] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.760877] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.760892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 00:26:23.318 [2024-04-24 20:57:47.770916] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.770969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.770984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.770991] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.770997] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.318 [2024-04-24 20:57:47.771013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.318 qpair failed and we were unable to recover it. 
00:26:23.318 [2024-04-24 20:57:47.780954] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.318 [2024-04-24 20:57:47.781004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.318 [2024-04-24 20:57:47.781018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.318 [2024-04-24 20:57:47.781026] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.318 [2024-04-24 20:57:47.781032] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.781046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.790976] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.791031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.791045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.791052] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.791058] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.791072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.801013] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.801076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.801094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.801101] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.801107] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.801122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 
00:26:23.319 [2024-04-24 20:57:47.810919] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.810974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.810988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.810995] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.811001] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.811015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.821060] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.821112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.821127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.821134] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.821140] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.821154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.831084] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.831139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.831153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.831160] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.831166] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.831180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 
00:26:23.319 [2024-04-24 20:57:47.841085] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.841158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.841173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.841180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.841186] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.841204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.851136] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.851191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.851205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.851212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.851219] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.851233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.861167] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.861221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.861235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.861242] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.861249] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.861263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 
00:26:23.319 [2024-04-24 20:57:47.871203] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.871256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.871271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.871278] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.871284] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.871298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.881222] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.881283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.881297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.881304] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.881310] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.881324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.891228] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.891310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.891328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.891336] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.891342] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.891356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 
00:26:23.319 [2024-04-24 20:57:47.901271] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.901322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.901337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.901344] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.901350] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.901364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.911293] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.319 [2024-04-24 20:57:47.911352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.319 [2024-04-24 20:57:47.911367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.319 [2024-04-24 20:57:47.911374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.319 [2024-04-24 20:57:47.911381] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.319 [2024-04-24 20:57:47.911394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.319 qpair failed and we were unable to recover it. 00:26:23.319 [2024-04-24 20:57:47.921303] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.320 [2024-04-24 20:57:47.921363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.320 [2024-04-24 20:57:47.921378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.320 [2024-04-24 20:57:47.921385] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.320 [2024-04-24 20:57:47.921392] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.320 [2024-04-24 20:57:47.921405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.320 qpair failed and we were unable to recover it. 
00:26:23.320 [2024-04-24 20:57:47.931341] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.320 [2024-04-24 20:57:47.931403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.320 [2024-04-24 20:57:47.931418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.320 [2024-04-24 20:57:47.931425] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.320 [2024-04-24 20:57:47.931435] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.320 [2024-04-24 20:57:47.931449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.320 qpair failed and we were unable to recover it. 00:26:23.320 [2024-04-24 20:57:47.941366] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.320 [2024-04-24 20:57:47.941460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.320 [2024-04-24 20:57:47.941485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.320 [2024-04-24 20:57:47.941494] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.320 [2024-04-24 20:57:47.941500] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.320 [2024-04-24 20:57:47.941519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.320 qpair failed and we were unable to recover it. 00:26:23.320 [2024-04-24 20:57:47.951405] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.320 [2024-04-24 20:57:47.951490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.320 [2024-04-24 20:57:47.951507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.320 [2024-04-24 20:57:47.951514] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.320 [2024-04-24 20:57:47.951521] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.320 [2024-04-24 20:57:47.951536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.320 qpair failed and we were unable to recover it. 
00:26:23.582 [2024-04-24 20:57:47.961443] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.582 [2024-04-24 20:57:47.961508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.582 [2024-04-24 20:57:47.961533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.582 [2024-04-24 20:57:47.961541] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.582 [2024-04-24 20:57:47.961548] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.582 [2024-04-24 20:57:47.961567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.582 qpair failed and we were unable to recover it. 00:26:23.582 [2024-04-24 20:57:47.971459] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.582 [2024-04-24 20:57:47.971518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.582 [2024-04-24 20:57:47.971543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.582 [2024-04-24 20:57:47.971552] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.582 [2024-04-24 20:57:47.971559] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.582 [2024-04-24 20:57:47.971578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.582 qpair failed and we were unable to recover it. 00:26:23.582 [2024-04-24 20:57:47.981520] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.582 [2024-04-24 20:57:47.981581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.582 [2024-04-24 20:57:47.981606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.582 [2024-04-24 20:57:47.981615] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.582 [2024-04-24 20:57:47.981621] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.582 [2024-04-24 20:57:47.981640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.582 qpair failed and we were unable to recover it. 
00:26:23.582 [2024-04-24 20:57:47.991526] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.582 [2024-04-24 20:57:47.991588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.582 [2024-04-24 20:57:47.991604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.582 [2024-04-24 20:57:47.991611] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.582 [2024-04-24 20:57:47.991617] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.582 [2024-04-24 20:57:47.991633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.582 qpair failed and we were unable to recover it. 00:26:23.582 [2024-04-24 20:57:48.001552] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.582 [2024-04-24 20:57:48.001610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.582 [2024-04-24 20:57:48.001626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.582 [2024-04-24 20:57:48.001633] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.582 [2024-04-24 20:57:48.001639] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.582 [2024-04-24 20:57:48.001653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.582 qpair failed and we were unable to recover it. 00:26:23.582 [2024-04-24 20:57:48.011576] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.582 [2024-04-24 20:57:48.011632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.582 [2024-04-24 20:57:48.011647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.011654] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.011661] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.011675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 
00:26:23.583 [2024-04-24 20:57:48.021604] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.021666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.021681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.021692] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.021699] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.021713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.031632] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.031695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.031710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.031717] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.031724] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.031744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.041730] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.041840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.041855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.041862] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.041869] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.041883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 
00:26:23.583 [2024-04-24 20:57:48.051682] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.051747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.051762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.051768] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.051775] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.051789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.061717] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.061774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.061789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.061796] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.061802] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.061817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.071719] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.071777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.071792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.071799] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.071805] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.071819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 
00:26:23.583 [2024-04-24 20:57:48.081780] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.081867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.081881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.081888] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.081894] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.081908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.091802] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.091896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.091911] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.091918] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.091924] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.091939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.101831] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.101888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.101902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.101909] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.101916] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.101930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 
00:26:23.583 [2024-04-24 20:57:48.111864] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.111934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.111948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.111958] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.111965] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.111980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.121888] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.121950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.121965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.121972] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.121980] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.121995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 00:26:23.583 [2024-04-24 20:57:48.131908] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.131965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.583 [2024-04-24 20:57:48.131980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.583 [2024-04-24 20:57:48.131988] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.583 [2024-04-24 20:57:48.131994] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.583 [2024-04-24 20:57:48.132008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.583 qpair failed and we were unable to recover it. 
00:26:23.583 [2024-04-24 20:57:48.141967] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.583 [2024-04-24 20:57:48.142028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.142042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.142049] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.142055] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.142069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 00:26:23.584 [2024-04-24 20:57:48.152033] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.584 [2024-04-24 20:57:48.152091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.152105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.152112] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.152119] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.152132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 00:26:23.584 [2024-04-24 20:57:48.161892] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.584 [2024-04-24 20:57:48.161954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.161968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.161975] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.161981] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.161995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 
00:26:23.584 [2024-04-24 20:57:48.172066] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.584 [2024-04-24 20:57:48.172117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.172131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.172138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.172144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.172158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 00:26:23.584 [2024-04-24 20:57:48.182063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.584 [2024-04-24 20:57:48.182116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.182131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.182138] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.182144] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.182158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 00:26:23.584 [2024-04-24 20:57:48.192103] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.584 [2024-04-24 20:57:48.192166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.192181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.192188] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.192194] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.192208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 
00:26:23.584 [2024-04-24 20:57:48.202087] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.584 [2024-04-24 20:57:48.202141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.202159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.202166] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.202172] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.202186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 00:26:23.584 [2024-04-24 20:57:48.212137] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.584 [2024-04-24 20:57:48.212192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.584 [2024-04-24 20:57:48.212207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.584 [2024-04-24 20:57:48.212214] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.584 [2024-04-24 20:57:48.212220] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.584 [2024-04-24 20:57:48.212234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.584 qpair failed and we were unable to recover it. 00:26:23.846 [2024-04-24 20:57:48.222131] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.846 [2024-04-24 20:57:48.222190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.846 [2024-04-24 20:57:48.222204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.846 [2024-04-24 20:57:48.222212] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.846 [2024-04-24 20:57:48.222218] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.846 [2024-04-24 20:57:48.222232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.846 qpair failed and we were unable to recover it. 
00:26:23.846 [2024-04-24 20:57:48.232196] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.846 [2024-04-24 20:57:48.232249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.846 [2024-04-24 20:57:48.232264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.846 [2024-04-24 20:57:48.232271] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.846 [2024-04-24 20:57:48.232278] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.846 [2024-04-24 20:57:48.232292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.846 qpair failed and we were unable to recover it. 00:26:23.846 [2024-04-24 20:57:48.242208] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.846 [2024-04-24 20:57:48.242270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.846 [2024-04-24 20:57:48.242285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.846 [2024-04-24 20:57:48.242292] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.846 [2024-04-24 20:57:48.242298] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.846 [2024-04-24 20:57:48.242315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.846 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.252240] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.252292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.252306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.252313] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.252320] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.252334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 
00:26:23.847 [2024-04-24 20:57:48.262310] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.262362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.262376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.262383] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.262390] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.262403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.272292] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.272353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.272367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.272374] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.272380] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.272394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.282319] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.282381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.282395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.282402] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.282409] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.282423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 
00:26:23.847 [2024-04-24 20:57:48.292346] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.292438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.292459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.292466] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.292473] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.292487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.302385] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.302443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.302468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.302476] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.302483] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.302502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.312572] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.312663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.312679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.312686] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.312693] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.312708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 
00:26:23.847 [2024-04-24 20:57:48.322330] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.322399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.322423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.322432] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.322439] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.322457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.332475] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.332536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.332560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.332569] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.332581] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.332600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.342370] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.342426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.342442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.342450] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.342456] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.342471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 
00:26:23.847 [2024-04-24 20:57:48.352532] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.352586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.352601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.352608] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.352615] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.352629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.362434] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.362531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.362547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.362554] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.362561] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.362576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 00:26:23.847 [2024-04-24 20:57:48.372548] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.372604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.372618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.847 [2024-04-24 20:57:48.372625] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.847 [2024-04-24 20:57:48.372632] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.847 [2024-04-24 20:57:48.372646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.847 qpair failed and we were unable to recover it. 
00:26:23.847 [2024-04-24 20:57:48.382620] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.847 [2024-04-24 20:57:48.382683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.847 [2024-04-24 20:57:48.382698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.382705] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.382712] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.382730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:23.848 [2024-04-24 20:57:48.392645] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.392698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.392713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.392720] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.392730] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.392745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:23.848 [2024-04-24 20:57:48.402663] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.402723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.402741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.402749] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.402755] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.402770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 
00:26:23.848 [2024-04-24 20:57:48.412713] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.412767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.412781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.412788] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.412795] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.412809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:23.848 [2024-04-24 20:57:48.422742] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.422796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.422811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.422818] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.422828] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.422842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:23.848 [2024-04-24 20:57:48.432782] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.432837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.432851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.432859] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.432865] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.432879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 
00:26:23.848 [2024-04-24 20:57:48.442665] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.442735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.442749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.442757] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.442763] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.442777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:23.848 [2024-04-24 20:57:48.452833] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.452887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.452901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.452908] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.452914] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.452928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:23.848 [2024-04-24 20:57:48.462817] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.462899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.462913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.462921] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.462927] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.462941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 
00:26:23.848 [2024-04-24 20:57:48.472867] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.472964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.472979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.472986] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.472992] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.473007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:23.848 [2024-04-24 20:57:48.482787] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:23.848 [2024-04-24 20:57:48.482844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:23.848 [2024-04-24 20:57:48.482858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:23.848 [2024-04-24 20:57:48.482865] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:23.848 [2024-04-24 20:57:48.482871] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:23.848 [2024-04-24 20:57:48.482885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:23.848 qpair failed and we were unable to recover it. 00:26:24.110 [2024-04-24 20:57:48.492862] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.110 [2024-04-24 20:57:48.492953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.110 [2024-04-24 20:57:48.492968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.110 [2024-04-24 20:57:48.492976] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.110 [2024-04-24 20:57:48.492982] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:24.110 [2024-04-24 20:57:48.492996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.110 qpair failed and we were unable to recover it. 
00:26:24.110 [2024-04-24 20:57:48.502999] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.110 [2024-04-24 20:57:48.503086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.110 [2024-04-24 20:57:48.503101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.110 [2024-04-24 20:57:48.503108] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.110 [2024-04-24 20:57:48.503114] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd78000b90 00:26:24.110 [2024-04-24 20:57:48.503129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:26:24.110 qpair failed and we were unable to recover it. 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with 
error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 [2024-04-24 20:57:48.503535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:24.110 [2024-04-24 20:57:48.513051] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.110 [2024-04-24 20:57:48.513115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.110 [2024-04-24 20:57:48.513141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.110 [2024-04-24 20:57:48.513149] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.110 [2024-04-24 20:57:48.513157] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd88000b90 00:26:24.110 [2024-04-24 20:57:48.513176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:24.110 qpair failed and we were unable to recover it. 00:26:24.110 [2024-04-24 20:57:48.523018] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.110 [2024-04-24 20:57:48.523082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.110 [2024-04-24 20:57:48.523099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.110 [2024-04-24 20:57:48.523106] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.110 [2024-04-24 20:57:48.523113] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd88000b90 00:26:24.110 [2024-04-24 20:57:48.523128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:26:24.110 qpair failed and we were unable to recover it. 00:26:24.110 [2024-04-24 20:57:48.533095] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.110 [2024-04-24 20:57:48.533213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.110 [2024-04-24 20:57:48.533277] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.110 [2024-04-24 20:57:48.533301] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.110 [2024-04-24 20:57:48.533330] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaeb290 00:26:24.110 [2024-04-24 20:57:48.533380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.110 qpair failed and we were unable to recover it. 
00:26:24.110 [2024-04-24 20:57:48.543063] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.110 [2024-04-24 20:57:48.543149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.110 [2024-04-24 20:57:48.543182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.110 [2024-04-24 20:57:48.543198] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.110 [2024-04-24 20:57:48.543212] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xaeb290 00:26:24.110 [2024-04-24 20:57:48.543241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:26:24.110 qpair failed and we were unable to recover it. 00:26:24.110 [2024-04-24 20:57:48.543643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae9ef0 is same with the state(5) to be set 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Read completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.110 starting I/O failed 00:26:24.110 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Read completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Read completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Read completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Read completed with error 
(sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Read completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Write completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Read completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 Read completed with error (sct=0, sc=8) 00:26:24.111 starting I/O failed 00:26:24.111 [2024-04-24 20:57:48.544007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:24.111 [2024-04-24 20:57:48.553106] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.111 [2024-04-24 20:57:48.553158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.111 [2024-04-24 20:57:48.553173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.111 [2024-04-24 20:57:48.553178] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.111 [2024-04-24 20:57:48.553187] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd80000b90 00:26:24.111 [2024-04-24 20:57:48.553199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:24.111 qpair failed and we were unable to recover it. 00:26:24.111 [2024-04-24 20:57:48.563108] ctrlr.c: 706:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:24.111 [2024-04-24 20:57:48.563163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:24.111 [2024-04-24 20:57:48.563175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:24.111 [2024-04-24 20:57:48.563180] nvme_tcp.c:2423:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:24.111 [2024-04-24 20:57:48.563185] nvme_tcp.c:2213:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcd80000b90 00:26:24.111 [2024-04-24 20:57:48.563195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:26:24.111 qpair failed and we were unable to recover it. 00:26:24.111 [2024-04-24 20:57:48.563690] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae9ef0 (9): Bad file descriptor 00:26:24.111 Initializing NVMe Controllers 00:26:24.111 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:24.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:24.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:24.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:24.111 Initialization complete. Launching workers. 
00:26:24.111 Starting thread on core 1
00:26:24.111 Starting thread on core 2
00:26:24.111 Starting thread on core 3
00:26:24.111 Starting thread on core 0
00:26:24.111 20:57:48 -- host/target_disconnect.sh@59 -- # sync
00:26:24.111
00:26:24.111 real 0m11.370s
00:26:24.111 user 0m22.001s
00:26:24.111 sys 0m3.641s
00:26:24.111 20:57:48 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:24.111 20:57:48 -- common/autotest_common.sh@10 -- # set +x
00:26:24.111 ************************************
00:26:24.111 END TEST nvmf_target_disconnect_tc2
00:26:24.111 ************************************
00:26:24.111 20:57:48 -- host/target_disconnect.sh@80 -- # '[' -n '' ']'
00:26:24.111 20:57:48 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:26:24.111 20:57:48 -- host/target_disconnect.sh@85 -- # nvmftestfini
00:26:24.111 20:57:48 -- nvmf/common.sh@477 -- # nvmfcleanup
00:26:24.111 20:57:48 -- nvmf/common.sh@117 -- # sync
00:26:24.111 20:57:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:24.111 20:57:48 -- nvmf/common.sh@120 -- # set +e
00:26:24.111 20:57:48 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:24.111 20:57:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:24.111 rmmod nvme_tcp
00:26:24.111 rmmod nvme_fabrics
00:26:24.111 rmmod nvme_keyring
00:26:24.111 20:57:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:24.111 20:57:48 -- nvmf/common.sh@124 -- # set -e
00:26:24.111 20:57:48 -- nvmf/common.sh@125 -- # return 0
00:26:24.111 20:57:48 -- nvmf/common.sh@478 -- # '[' -n 2936405 ']'
00:26:24.111 20:57:48 -- nvmf/common.sh@479 -- # killprocess 2936405
00:26:24.111 20:57:48 -- common/autotest_common.sh@936 -- # '[' -z 2936405 ']'
00:26:24.111 20:57:48 -- common/autotest_common.sh@940 -- # kill -0 2936405
00:26:24.111 20:57:48 -- common/autotest_common.sh@941 -- # uname
00:26:24.111 20:57:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:24.111 20:57:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2936405
00:26:24.111 20:57:48 -- common/autotest_common.sh@942 -- # process_name=reactor_4
00:26:24.111 20:57:48 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']'
00:26:24.111 20:57:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2936405'
00:26:24.111 killing process with pid 2936405
00:26:24.111 20:57:48 -- common/autotest_common.sh@955 -- # kill 2936405
00:26:24.111 20:57:48 -- common/autotest_common.sh@960 -- # wait 2936405
00:26:24.371 20:57:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:26:24.371 20:57:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:26:24.371 20:57:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:26:24.371 20:57:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:24.371 20:57:48 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:24.371 20:57:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:24.371 20:57:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:24.371 20:57:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:26.918 20:57:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:26.918
00:26:26.918 real 0m21.745s
00:26:26.918 user 0m49.382s
00:26:26.918 sys 0m9.777s
00:26:26.918 20:57:50 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:26.918 20:57:50 -- common/autotest_common.sh@10 -- # set +x
00:26:26.918 ************************************
00:26:26.918 END TEST nvmf_target_disconnect
************************************
00:26:26.918 20:57:51 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:26:26.918 20:57:51 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:26.918 20:57:51 -- common/autotest_common.sh@10 -- # set +x
00:26:26.918 20:57:51 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:26:26.918
00:26:26.918 real 19m33.772s
00:26:26.918 user 40m11.234s
00:26:26.918 sys 6m33.900s
00:26:26.918 20:57:51 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:26.918 20:57:51 -- common/autotest_common.sh@10 -- # set +x
00:26:26.918 ************************************
00:26:26.918 END TEST nvmf_tcp
00:26:26.918 ************************************
00:26:26.918 20:57:51 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:26:26.918 20:57:51 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:26.918 20:57:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:26.918 20:57:51 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:26.918 20:57:51 -- common/autotest_common.sh@10 -- # set +x
00:26:26.918 ************************************
00:26:26.918 START TEST spdkcli_nvmf_tcp
00:26:26.918 ************************************
00:26:26.918 20:57:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:26.918 * Looking for test storage...
00:26:26.918 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:26:26.918 20:57:51 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:26:26.918 20:57:51 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:26:26.918 20:57:51 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:26:26.918 20:57:51 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:26.918 20:57:51 -- nvmf/common.sh@7 -- # uname -s
00:26:26.918 20:57:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:26.918 20:57:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:26.918 20:57:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:26.918 20:57:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:26.918 20:57:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:26.918 20:57:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:26.918 20:57:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:26.918 20:57:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:26.918 20:57:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:26.918 20:57:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:26.918 20:57:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204
00:26:26.918 20:57:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204
00:26:26.918 20:57:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:26.918 20:57:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:26.918 20:57:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:26.918 20:57:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:26.918 20:57:51 -- nvmf/common.sh@45 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.918 20:57:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.918 20:57:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.918 20:57:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.918 20:57:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 20:57:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 20:57:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 20:57:51 -- paths/export.sh@5 -- # export PATH 00:26:26.918 20:57:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.918 20:57:51 -- nvmf/common.sh@47 -- # : 0 00:26:26.918 20:57:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:26.918 20:57:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:26.918 20:57:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.918 20:57:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.918 20:57:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.918 20:57:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:26.918 20:57:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:26.918 20:57:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:26.918 20:57:51 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:26.918 20:57:51 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:26.918 20:57:51 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:26.918 20:57:51 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:26.918 20:57:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:26.918 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:26:26.918 20:57:51 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:26.918 20:57:51 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2938332 00:26:26.918 20:57:51 -- spdkcli/common.sh@34 -- # waitforlisten 2938332 00:26:26.918 20:57:51 -- common/autotest_common.sh@817 -- # '[' -z 2938332 ']' 00:26:26.918 20:57:51 -- 
spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:26.918 20:57:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.918 20:57:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:26.918 20:57:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.919 20:57:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:26.919 20:57:51 -- common/autotest_common.sh@10 -- # set +x 00:26:26.919 [2024-04-24 20:57:51.433642] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:26:26.919 [2024-04-24 20:57:51.433710] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938332 ] 00:26:26.919 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.919 [2024-04-24 20:57:51.514482] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:27.179 [2024-04-24 20:57:51.602377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.179 [2024-04-24 20:57:51.602385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.749 20:57:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:27.749 20:57:52 -- common/autotest_common.sh@850 -- # return 0 00:26:27.749 20:57:52 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:27.749 20:57:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:27.749 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:26:27.749 20:57:52 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:27.749 20:57:52 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:27.749 20:57:52 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:27.749 20:57:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:27.749 20:57:52 -- common/autotest_common.sh@10 -- # set +x 00:26:27.749 20:57:52 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:27.749 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:27.749 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:27.749 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:27.749 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:27.749 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:27.749 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:27.749 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:27.749 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:27.749 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:27.749 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:27.749 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:27.749 ' 00:26:28.320 [2024-04-24 20:57:52.729133] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:30.271 [2024-04-24 20:57:54.732259] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.655 [2024-04-24 20:57:55.900091] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:33.567 [2024-04-24 20:57:58.046308] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:26:35.479 [2024-04-24 20:57:59.887890] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:36.862 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:36.862 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:36.862 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:36.862 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:36.862 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:36.862 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:36.862 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:36.862 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:36.862 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:36.862 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:36.862 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:36.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:36.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:26:36.863 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:36.863 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:36.863 20:58:01 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:36.863 20:58:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:36.863 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:26:36.863 20:58:01 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:36.863 20:58:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:36.863 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:26:36.863 20:58:01 -- spdkcli/nvmf.sh@69 -- # check_match 00:26:36.863 20:58:01 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:26:37.433 20:58:01 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:37.433 20:58:01 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:37.433 20:58:01 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:37.433 20:58:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:37.433 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:26:37.433 20:58:01 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:37.433 20:58:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:37.433 20:58:01 -- common/autotest_common.sh@10 -- # set +x 00:26:37.433 20:58:01 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:37.433 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:37.433 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:37.433 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:37.433 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:37.433 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:37.433 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:37.433 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:37.433 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:37.433 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:37.433 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:37.433 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:37.433 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:37.433 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:37.433 ' 00:26:42.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:42.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:26:42.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:42.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:42.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:42.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:42.718 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:42.718 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:42.718 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:42.718 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:26:42.718 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:42.718 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:42.718 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:42.718 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:42.718 20:58:06 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:42.718 20:58:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:42.718 20:58:06 -- common/autotest_common.sh@10 -- # set +x 00:26:42.718 20:58:06 -- spdkcli/nvmf.sh@90 -- # killprocess 2938332 00:26:42.718 20:58:06 -- common/autotest_common.sh@936 -- # '[' -z 2938332 ']' 00:26:42.718 20:58:06 -- common/autotest_common.sh@940 -- # kill -0 2938332 00:26:42.718 20:58:06 -- common/autotest_common.sh@941 -- # uname 00:26:42.718 20:58:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:42.718 20:58:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2938332 00:26:42.718 20:58:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:42.718 20:58:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:42.719 20:58:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2938332' 00:26:42.719 killing process with pid 2938332 00:26:42.719 20:58:06 -- common/autotest_common.sh@955 -- # kill 2938332 00:26:42.719 [2024-04-24 20:58:06.934788] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:42.719 20:58:06 -- common/autotest_common.sh@960 -- # wait 2938332 00:26:42.719 20:58:07 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:42.719 20:58:07 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:42.719 20:58:07 -- spdkcli/common.sh@13 -- # '[' -n 2938332 ']' 00:26:42.719 20:58:07 -- spdkcli/common.sh@14 -- # killprocess 2938332 00:26:42.719 20:58:07 -- common/autotest_common.sh@936 -- # '[' -z 2938332 ']' 00:26:42.719 20:58:07 -- common/autotest_common.sh@940 -- # kill -0 2938332 00:26:42.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2938332) - No such process 00:26:42.719 20:58:07 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2938332 is not found' 00:26:42.719 Process with pid 2938332 is not found 00:26:42.719 20:58:07 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:42.719 20:58:07 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:42.719 20:58:07 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:42.719 00:26:42.719 real 0m15.825s 00:26:42.719 user 0m32.718s 00:26:42.719 sys 0m0.750s 00:26:42.719 20:58:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:42.719 20:58:07 -- common/autotest_common.sh@10 -- # set +x 00:26:42.719 ************************************ 00:26:42.719 END TEST spdkcli_nvmf_tcp 00:26:42.719 ************************************ 00:26:42.719 20:58:07 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:42.719 20:58:07 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:42.719 20:58:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:42.719 20:58:07 -- 
common/autotest_common.sh@10 -- # set +x 00:26:42.719 ************************************ 00:26:42.719 START TEST nvmf_identify_passthru 00:26:42.719 ************************************ 00:26:42.719 20:58:07 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:42.719 * Looking for test storage... 00:26:42.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:42.719 20:58:07 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.719 20:58:07 -- nvmf/common.sh@7 -- # uname -s 00:26:42.719 20:58:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.719 20:58:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.719 20:58:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.719 20:58:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.719 20:58:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.719 20:58:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.719 20:58:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.719 20:58:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.719 20:58:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.719 20:58:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.719 20:58:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:42.719 20:58:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:42.719 20:58:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.719 20:58:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.719 20:58:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.719 20:58:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.980 20:58:07 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.980 20:58:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.980 20:58:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.980 20:58:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.980 20:58:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- paths/export.sh@5 -- # export PATH 00:26:42.980 20:58:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- nvmf/common.sh@47 -- # : 0 00:26:42.980 20:58:07 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:42.980 20:58:07 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:42.980 20:58:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.980 20:58:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.980 20:58:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.980 20:58:07 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:42.980 20:58:07 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:42.980 20:58:07 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:42.980 20:58:07 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.980 20:58:07 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.980 20:58:07 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.980 20:58:07 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.980 20:58:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- paths/export.sh@5 -- # export PATH 00:26:42.980 20:58:07 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.980 20:58:07 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:42.980 20:58:07 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:42.980 20:58:07 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.980 20:58:07 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:42.980 20:58:07 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:42.980 20:58:07 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:42.980 20:58:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.980 20:58:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:42.980 20:58:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.980 20:58:07 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:42.981 20:58:07 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:42.981 20:58:07 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:42.981 20:58:07 -- common/autotest_common.sh@10 -- # set +x 00:26:51.123 20:58:14 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:51.123 20:58:14 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:51.123 20:58:14 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:51.123 20:58:14 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:51.123 20:58:14 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:51.123 20:58:14 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:51.123 20:58:14 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:51.123 20:58:14 -- nvmf/common.sh@295 -- # net_devs=() 00:26:51.123 20:58:14 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:51.123 20:58:14 -- nvmf/common.sh@296 -- # e810=() 00:26:51.123 20:58:14 -- nvmf/common.sh@296 -- # local -ga e810 00:26:51.123 20:58:14 -- nvmf/common.sh@297 -- # x722=() 00:26:51.123 20:58:14 -- nvmf/common.sh@297 -- # local -ga x722 00:26:51.123 20:58:14 -- nvmf/common.sh@298 -- # mlx=() 00:26:51.123 20:58:14 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:51.123 20:58:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.123 20:58:14 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:51.123 20:58:14 -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:51.123 20:58:14 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:51.123 20:58:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.123 20:58:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:51.123 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:51.123 20:58:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:51.123 20:58:14 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:51.123 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:51.123 20:58:14 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:51.123 20:58:14 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.123 20:58:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.123 20:58:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:51.123 20:58:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.123 20:58:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:51.123 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:51.123 20:58:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.123 20:58:14 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:51.123 20:58:14 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.123 20:58:14 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:51.123 20:58:14 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.123 20:58:14 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:51.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:51.123 20:58:14 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.123 20:58:14 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:51.123 20:58:14 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:51.123 20:58:14 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:51.123 20:58:14 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:51.123 20:58:14 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.123 20:58:14 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:51.123 20:58:14 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:51.123 20:58:14 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:51.123 20:58:14 -- nvmf/common.sh@237 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:51.123 20:58:14 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:51.123 20:58:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:51.123 20:58:14 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.123 20:58:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:51.123 20:58:14 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:51.123 20:58:14 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:51.123 20:58:14 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:51.123 20:58:14 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:51.123 20:58:14 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:51.123 20:58:14 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:51.123 20:58:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:51.123 20:58:14 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:51.123 20:58:14 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:51.123 20:58:14 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:51.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:51.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:26:51.123 00:26:51.123 --- 10.0.0.2 ping statistics --- 00:26:51.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.123 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:26:51.123 20:58:14 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:51.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:26:51.123 00:26:51.123 --- 10.0.0.1 ping statistics --- 00:26:51.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.123 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:26:51.123 20:58:14 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.123 20:58:14 -- nvmf/common.sh@411 -- # return 0 00:26:51.123 20:58:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:51.123 20:58:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.123 20:58:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:51.123 20:58:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.123 20:58:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:51.123 20:58:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:51.123 20:58:14 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:51.123 20:58:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:51.123 20:58:14 -- common/autotest_common.sh@10 -- # set +x 00:26:51.123 20:58:14 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:51.123 20:58:14 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:51.123 20:58:14 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:51.123 20:58:14 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:51.123 20:58:14 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:51.123 20:58:14 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:51.123 20:58:14 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:51.123 20:58:14 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:26:51.123 20:58:14 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:51.123 20:58:14 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:51.123 20:58:14 -- common/autotest_common.sh@1501 -- # (( 1 == 0 )) 00:26:51.123 20:58:14 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:65:00.0 00:26:51.123 20:58:14 -- common/autotest_common.sh@1513 -- # echo 0000:65:00.0 00:26:51.123 20:58:14 -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:26:51.123 20:58:14 -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:26:51.123 20:58:14 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:51.123 20:58:14 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:51.123 20:58:14 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:51.123 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.123 20:58:15 -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605480 00:26:51.123 20:58:15 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:26:51.123 20:58:15 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:51.123 20:58:15 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:51.124 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.124 20:58:15 -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:26:51.124 20:58:15 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:51.124 20:58:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:51.124 20:58:15 -- common/autotest_common.sh@10 -- # set +x 00:26:51.384 20:58:15 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:51.384 20:58:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:51.384 20:58:15 -- common/autotest_common.sh@10 -- # set +x 00:26:51.384 20:58:15 -- target/identify_passthru.sh@31 -- # nvmfpid=2945286 00:26:51.384 20:58:15 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:51.384 20:58:15 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:51.384 20:58:15 -- target/identify_passthru.sh@35 -- # waitforlisten 2945286 00:26:51.384 20:58:15 -- common/autotest_common.sh@817 -- # '[' -z 2945286 ']' 00:26:51.384 20:58:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.384 20:58:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:51.384 20:58:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.384 20:58:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:51.384 20:58:15 -- common/autotest_common.sh@10 -- # set +x 00:26:51.384 [2024-04-24 20:58:15.847015] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
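The nvme_identify block traced above boils down to: discover the first local NVMe controller, identify it over PCIe, and keep its serial and model number for the later passthru comparison. A minimal sketch of that step, assuming the SPDK tree layout used in this run (gen_nvme.sh under scripts/, spdk_nvme_identify under build/bin, and the BDF 0000:65:00.0 reported above):

  # get_first_nvme_bdf: gen_nvme.sh lists local controllers as JSON, jq pulls the PCI addresses.
  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n 1)
  # Identify over PCIe and keep only the fields the test compares later.
  nvme_serial_number=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  nvme_model_number=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')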
00:26:51.384 [2024-04-24 20:58:15.847081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.384 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.384 [2024-04-24 20:58:15.935617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:51.645 [2024-04-24 20:58:16.028305] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.645 [2024-04-24 20:58:16.028364] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.645 [2024-04-24 20:58:16.028372] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.645 [2024-04-24 20:58:16.028379] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.645 [2024-04-24 20:58:16.028385] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:51.645 [2024-04-24 20:58:16.028525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:51.645 [2024-04-24 20:58:16.028655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:51.645 [2024-04-24 20:58:16.028799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:51.645 [2024-04-24 20:58:16.028799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:52.246 20:58:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:52.246 20:58:16 -- common/autotest_common.sh@850 -- # return 0 00:26:52.246 20:58:16 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:52.246 20:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.246 20:58:16 -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 INFO: Log level set to 20 00:26:52.246 INFO: Requests: 00:26:52.246 { 00:26:52.246 "jsonrpc": "2.0", 00:26:52.246 "method": "nvmf_set_config", 00:26:52.246 "id": 1, 00:26:52.246 "params": { 00:26:52.246 "admin_cmd_passthru": { 00:26:52.246 "identify_ctrlr": true 00:26:52.246 } 00:26:52.246 } 00:26:52.246 } 00:26:52.246 00:26:52.246 INFO: response: 00:26:52.246 { 00:26:52.246 "jsonrpc": "2.0", 00:26:52.246 "id": 1, 00:26:52.246 "result": true 00:26:52.246 } 00:26:52.246 00:26:52.246 20:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.246 20:58:16 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:52.246 20:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.246 20:58:16 -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 INFO: Setting log level to 20 00:26:52.246 INFO: Setting log level to 20 00:26:52.246 INFO: Log level set to 20 00:26:52.246 INFO: Log level set to 20 00:26:52.246 INFO: Requests: 00:26:52.246 { 00:26:52.246 "jsonrpc": "2.0", 00:26:52.246 "method": "framework_start_init", 00:26:52.246 "id": 1 00:26:52.246 } 00:26:52.246 00:26:52.246 INFO: Requests: 00:26:52.246 { 00:26:52.246 "jsonrpc": "2.0", 00:26:52.246 "method": "framework_start_init", 00:26:52.246 "id": 1 00:26:52.246 } 00:26:52.246 00:26:52.246 [2024-04-24 20:58:16.810140] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:52.246 INFO: response: 00:26:52.246 { 00:26:52.246 "jsonrpc": "2.0", 00:26:52.246 "id": 1, 00:26:52.246 "result": true 00:26:52.246 } 00:26:52.246 00:26:52.246 INFO: response: 00:26:52.246 { 00:26:52.246 
"jsonrpc": "2.0", 00:26:52.246 "id": 1, 00:26:52.246 "result": true 00:26:52.246 } 00:26:52.246 00:26:52.246 20:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.246 20:58:16 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:52.246 20:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.246 20:58:16 -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 INFO: Setting log level to 40 00:26:52.246 INFO: Setting log level to 40 00:26:52.246 INFO: Setting log level to 40 00:26:52.246 [2024-04-24 20:58:16.823391] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:52.246 20:58:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.246 20:58:16 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:52.246 20:58:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:52.246 20:58:16 -- common/autotest_common.sh@10 -- # set +x 00:26:52.246 20:58:16 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:26:52.246 20:58:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.246 20:58:16 -- common/autotest_common.sh@10 -- # set +x 00:26:52.817 Nvme0n1 00:26:52.817 20:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.817 20:58:17 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:52.817 20:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.817 20:58:17 -- common/autotest_common.sh@10 -- # set +x 00:26:52.817 20:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.817 20:58:17 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:52.817 20:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.817 20:58:17 -- common/autotest_common.sh@10 -- # set +x 00:26:52.817 20:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.817 20:58:17 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:52.817 20:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.817 20:58:17 -- common/autotest_common.sh@10 -- # set +x 00:26:52.817 [2024-04-24 20:58:17.205833] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:52.817 20:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.817 20:58:17 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:52.817 20:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:52.817 20:58:17 -- common/autotest_common.sh@10 -- # set +x 00:26:52.817 [2024-04-24 20:58:17.217614] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:52.817 [ 00:26:52.817 { 00:26:52.817 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:52.817 "subtype": "Discovery", 00:26:52.817 "listen_addresses": [], 00:26:52.817 "allow_any_host": true, 00:26:52.817 "hosts": [] 00:26:52.817 }, 00:26:52.817 { 00:26:52.817 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:52.817 "subtype": "NVMe", 00:26:52.817 "listen_addresses": [ 00:26:52.817 { 00:26:52.817 "transport": "TCP", 00:26:52.817 "trtype": "TCP", 00:26:52.817 "adrfam": "IPv4", 00:26:52.817 "traddr": "10.0.0.2", 00:26:52.817 "trsvcid": "4420" 00:26:52.817 } 00:26:52.817 ], 
00:26:52.817 "allow_any_host": true, 00:26:52.817 "hosts": [], 00:26:52.817 "serial_number": "SPDK00000000000001", 00:26:52.817 "model_number": "SPDK bdev Controller", 00:26:52.817 "max_namespaces": 1, 00:26:52.817 "min_cntlid": 1, 00:26:52.817 "max_cntlid": 65519, 00:26:52.817 "namespaces": [ 00:26:52.817 { 00:26:52.817 "nsid": 1, 00:26:52.817 "bdev_name": "Nvme0n1", 00:26:52.817 "name": "Nvme0n1", 00:26:52.817 "nguid": "36344730526054800025384500000037", 00:26:52.817 "uuid": "36344730-5260-5480-0025-384500000037" 00:26:52.817 } 00:26:52.817 ] 00:26:52.817 } 00:26:52.817 ] 00:26:52.817 20:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:52.817 20:58:17 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:52.817 20:58:17 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:52.817 20:58:17 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:52.817 EAL: No free 2048 kB hugepages reported on node 1 00:26:52.817 20:58:17 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605480 00:26:52.817 20:58:17 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:52.817 20:58:17 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:52.817 20:58:17 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:52.817 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.077 20:58:17 -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:26:53.077 20:58:17 -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605480 '!=' S64GNE0R605480 ']' 00:26:53.077 20:58:17 -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:26:53.077 20:58:17 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.077 20:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:53.077 20:58:17 -- common/autotest_common.sh@10 -- # set +x 00:26:53.077 20:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:53.077 20:58:17 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:53.077 20:58:17 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:53.077 20:58:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:53.077 20:58:17 -- nvmf/common.sh@117 -- # sync 00:26:53.077 20:58:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:53.077 20:58:17 -- nvmf/common.sh@120 -- # set +e 00:26:53.077 20:58:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:53.077 20:58:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:53.077 rmmod nvme_tcp 00:26:53.077 rmmod nvme_fabrics 00:26:53.077 rmmod nvme_keyring 00:26:53.077 20:58:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:53.077 20:58:17 -- nvmf/common.sh@124 -- # set -e 00:26:53.077 20:58:17 -- nvmf/common.sh@125 -- # return 0 00:26:53.077 20:58:17 -- nvmf/common.sh@478 -- # '[' -n 2945286 ']' 00:26:53.077 20:58:17 -- nvmf/common.sh@479 -- # killprocess 2945286 00:26:53.077 20:58:17 -- common/autotest_common.sh@936 -- # '[' -z 2945286 ']' 00:26:53.077 20:58:17 -- common/autotest_common.sh@940 -- # kill -0 2945286 00:26:53.077 20:58:17 -- common/autotest_common.sh@941 -- # uname 00:26:53.077 20:58:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:53.077 
20:58:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2945286 00:26:53.340 20:58:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:53.340 20:58:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:53.340 20:58:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2945286' 00:26:53.340 killing process with pid 2945286 00:26:53.340 20:58:17 -- common/autotest_common.sh@955 -- # kill 2945286 00:26:53.340 [2024-04-24 20:58:17.729471] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:53.340 20:58:17 -- common/autotest_common.sh@960 -- # wait 2945286 00:26:53.600 20:58:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:53.600 20:58:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:53.600 20:58:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:53.600 20:58:17 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:53.600 20:58:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:53.600 20:58:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.600 20:58:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:53.600 20:58:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.511 20:58:20 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:55.511 00:26:55.511 real 0m12.834s 00:26:55.511 user 0m10.291s 00:26:55.511 sys 0m6.216s 00:26:55.511 20:58:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:55.511 20:58:20 -- common/autotest_common.sh@10 -- # set +x 00:26:55.511 ************************************ 00:26:55.511 END TEST nvmf_identify_passthru 00:26:55.511 ************************************ 00:26:55.511 20:58:20 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:55.511 20:58:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:55.511 20:58:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:55.511 20:58:20 -- common/autotest_common.sh@10 -- # set +x 00:26:55.772 ************************************ 00:26:55.772 START TEST nvmf_dif 00:26:55.772 ************************************ 00:26:55.772 20:58:20 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:55.772 * Looking for test storage... 
00:26:55.772 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:55.772 20:58:20 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.772 20:58:20 -- nvmf/common.sh@7 -- # uname -s 00:26:55.772 20:58:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.772 20:58:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.772 20:58:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.772 20:58:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.772 20:58:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.772 20:58:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.772 20:58:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.772 20:58:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.772 20:58:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.772 20:58:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.772 20:58:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:55.772 20:58:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:26:55.772 20:58:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.772 20:58:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.772 20:58:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.772 20:58:20 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.772 20:58:20 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.772 20:58:20 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.772 20:58:20 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.772 20:58:20 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.772 20:58:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.772 20:58:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.772 20:58:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.772 20:58:20 -- paths/export.sh@5 -- # export PATH 00:26:55.772 20:58:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.772 20:58:20 -- nvmf/common.sh@47 -- # : 0 00:26:55.772 20:58:20 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.772 20:58:20 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.772 20:58:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.772 20:58:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.772 20:58:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.772 20:58:20 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.772 20:58:20 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.772 20:58:20 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.772 20:58:20 -- target/dif.sh@15 -- # NULL_META=16 00:26:55.772 20:58:20 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:55.772 20:58:20 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:55.772 20:58:20 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:55.772 20:58:20 -- target/dif.sh@135 -- # nvmftestinit 00:26:55.772 20:58:20 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:55.772 20:58:20 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:55.772 20:58:20 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:55.772 20:58:20 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:55.772 20:58:20 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:55.772 20:58:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.772 20:58:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:55.772 20:58:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.033 20:58:20 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:26:56.033 20:58:20 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:56.033 20:58:20 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:56.033 20:58:20 -- common/autotest_common.sh@10 -- # set +x 00:27:02.618 20:58:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:02.618 20:58:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:02.618 20:58:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:02.618 20:58:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:02.618 20:58:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:02.618 20:58:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:02.618 20:58:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:02.618 20:58:27 -- nvmf/common.sh@295 -- # net_devs=() 00:27:02.618 20:58:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:02.618 20:58:27 -- nvmf/common.sh@296 -- # e810=() 00:27:02.618 20:58:27 -- nvmf/common.sh@296 -- # local -ga e810 00:27:02.618 20:58:27 -- nvmf/common.sh@297 -- # x722=() 00:27:02.618 20:58:27 -- nvmf/common.sh@297 -- # local -ga x722 00:27:02.618 20:58:27 -- nvmf/common.sh@298 -- # mlx=() 00:27:02.618 20:58:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:02.618 20:58:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:02.618 20:58:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:02.618 20:58:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:02.618 20:58:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:02.618 20:58:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:02.618 20:58:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.618 20:58:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:02.618 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:02.618 20:58:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:02.618 20:58:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:02.618 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:02.618 20:58:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:02.618 20:58:27 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.618 20:58:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.618 20:58:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:02.618 20:58:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.618 20:58:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:02.618 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:02.618 20:58:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:27:02.618 20:58:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:02.618 20:58:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:02.618 20:58:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:27:02.618 20:58:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:02.618 20:58:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:02.618 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:02.618 20:58:27 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:02.618 20:58:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:27:02.618 20:58:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:27:02.618 20:58:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:27:02.618 20:58:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:27:02.618 20:58:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.618 20:58:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.618 20:58:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:02.618 20:58:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:02.618 20:58:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:02.618 20:58:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:02.618 20:58:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:02.618 20:58:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:02.618 20:58:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.618 20:58:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:02.618 20:58:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:02.618 20:58:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:02.618 20:58:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:02.879 20:58:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:02.879 20:58:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:02.879 20:58:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:02.879 20:58:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:02.879 20:58:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:02.879 20:58:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:02.879 20:58:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:02.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.525 ms 00:27:02.879 00:27:02.879 --- 10.0.0.2 ping statistics --- 00:27:02.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.879 rtt min/avg/max/mdev = 0.525/0.525/0.525/0.000 ms 00:27:02.879 20:58:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:03.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:03.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:27:03.140 00:27:03.140 --- 10.0.0.1 ping statistics --- 00:27:03.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:03.140 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:27:03.140 20:58:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:03.140 20:58:27 -- nvmf/common.sh@411 -- # return 0 00:27:03.140 20:58:27 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:27:03.140 20:58:27 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:06.439 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:06.439 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:27:06.440 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:27:06.440 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:27:06.708 20:58:31 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:06.708 20:58:31 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:06.708 20:58:31 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:06.708 20:58:31 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:06.708 20:58:31 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:06.708 20:58:31 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:06.708 20:58:31 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:06.708 20:58:31 -- target/dif.sh@137 -- # nvmfappstart 00:27:06.708 20:58:31 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:06.708 20:58:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:06.709 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:27:06.709 20:58:31 -- nvmf/common.sh@470 -- # nvmfpid=2951462 00:27:06.709 20:58:31 -- nvmf/common.sh@471 -- # waitforlisten 2951462 00:27:06.709 20:58:31 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:06.709 20:58:31 -- common/autotest_common.sh@817 -- # '[' -z 2951462 ']' 00:27:06.709 20:58:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:06.709 20:58:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:06.709 20:58:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:06.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
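nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waits in waitforlisten until the RPC socket answers, which is what the 'Waiting for process...' message reflects. A rough sketch of the same idea; the retry count and 0.1 s interval here are illustrative choices, not the exact values autotest_common.sh uses:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket rather than sleeping a fixed time; rpc_get_methods
  # succeeds as soon as the target is accepting RPCs.
  for _ in $(seq 1 100); do
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done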
00:27:06.709 20:58:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:06.709 20:58:31 -- common/autotest_common.sh@10 -- # set +x 00:27:06.709 [2024-04-24 20:58:31.235678] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:27:06.709 [2024-04-24 20:58:31.235744] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:06.709 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.709 [2024-04-24 20:58:31.319890] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.973 [2024-04-24 20:58:31.412516] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:06.973 [2024-04-24 20:58:31.412568] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:06.973 [2024-04-24 20:58:31.412577] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:06.974 [2024-04-24 20:58:31.412583] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:06.974 [2024-04-24 20:58:31.412589] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:06.974 [2024-04-24 20:58:31.412625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:07.573 20:58:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:07.573 20:58:32 -- common/autotest_common.sh@850 -- # return 0 00:27:07.573 20:58:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:07.573 20:58:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:07.573 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:27:07.573 20:58:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:07.573 20:58:32 -- target/dif.sh@139 -- # create_transport 00:27:07.573 20:58:32 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:07.573 20:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.573 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:27:07.573 [2024-04-24 20:58:32.156883] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:07.573 20:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.573 20:58:32 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:07.573 20:58:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:07.573 20:58:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:07.573 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:27:07.834 ************************************ 00:27:07.834 START TEST fio_dif_1_default 00:27:07.834 ************************************ 00:27:07.834 20:58:32 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:07.834 20:58:32 -- target/dif.sh@86 -- # create_subsystems 0 00:27:07.834 20:58:32 -- target/dif.sh@28 -- # local sub 00:27:07.834 20:58:32 -- target/dif.sh@30 -- # for sub in "$@" 00:27:07.834 20:58:32 -- target/dif.sh@31 -- # create_subsystem 0 00:27:07.834 20:58:32 -- target/dif.sh@18 -- # local sub_id=0 00:27:07.834 20:58:32 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:07.834 20:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.834 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:27:07.834 
bdev_null0 00:27:07.834 20:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.834 20:58:32 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:07.834 20:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.834 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:27:07.834 20:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.834 20:58:32 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:07.834 20:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.834 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:27:07.834 20:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.834 20:58:32 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:07.834 20:58:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:07.834 20:58:32 -- common/autotest_common.sh@10 -- # set +x 00:27:07.834 [2024-04-24 20:58:32.361628] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.834 20:58:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:07.834 20:58:32 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:07.835 20:58:32 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:07.835 20:58:32 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:07.835 20:58:32 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:07.835 20:58:32 -- nvmf/common.sh@521 -- # config=() 00:27:07.835 20:58:32 -- nvmf/common.sh@521 -- # local subsystem config 00:27:07.835 20:58:32 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:07.835 20:58:32 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:07.835 20:58:32 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:07.835 { 00:27:07.835 "params": { 00:27:07.835 "name": "Nvme$subsystem", 00:27:07.835 "trtype": "$TEST_TRANSPORT", 00:27:07.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:07.835 "adrfam": "ipv4", 00:27:07.835 "trsvcid": "$NVMF_PORT", 00:27:07.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:07.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:07.835 "hdgst": ${hdgst:-false}, 00:27:07.835 "ddgst": ${ddgst:-false} 00:27:07.835 }, 00:27:07.835 "method": "bdev_nvme_attach_controller" 00:27:07.835 } 00:27:07.835 EOF 00:27:07.835 )") 00:27:07.835 20:58:32 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:07.835 20:58:32 -- target/dif.sh@82 -- # gen_fio_conf 00:27:07.835 20:58:32 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:07.835 20:58:32 -- target/dif.sh@54 -- # local file 00:27:07.835 20:58:32 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:07.835 20:58:32 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:07.835 20:58:32 -- target/dif.sh@56 -- # cat 00:27:07.835 20:58:32 -- common/autotest_common.sh@1327 -- # shift 00:27:07.835 20:58:32 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:07.835 20:58:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:07.835 20:58:32 -- nvmf/common.sh@543 -- # cat 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # 
awk '{print $3}' 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:07.835 20:58:32 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:07.835 20:58:32 -- target/dif.sh@72 -- # (( file <= files )) 00:27:07.835 20:58:32 -- nvmf/common.sh@545 -- # jq . 00:27:07.835 20:58:32 -- nvmf/common.sh@546 -- # IFS=, 00:27:07.835 20:58:32 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:07.835 "params": { 00:27:07.835 "name": "Nvme0", 00:27:07.835 "trtype": "tcp", 00:27:07.835 "traddr": "10.0.0.2", 00:27:07.835 "adrfam": "ipv4", 00:27:07.835 "trsvcid": "4420", 00:27:07.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:07.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:07.835 "hdgst": false, 00:27:07.835 "ddgst": false 00:27:07.835 }, 00:27:07.835 "method": "bdev_nvme_attach_controller" 00:27:07.835 }' 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:07.835 20:58:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:07.835 20:58:32 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:07.835 20:58:32 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:07.835 20:58:32 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:07.835 20:58:32 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:07.835 20:58:32 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:08.405 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:08.405 fio-3.35 00:27:08.405 Starting 1 thread 00:27:08.405 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.671 00:27:20.671 filename0: (groupid=0, jobs=1): err= 0: pid=2951998: Wed Apr 24 20:58:43 2024 00:27:20.671 read: IOPS=188, BW=752KiB/s (770kB/s)(7536KiB/10021msec) 00:27:20.671 slat (nsec): min=5319, max=84977, avg=6245.20, stdev=2304.11 00:27:20.671 clat (usec): min=583, max=43342, avg=21258.45, stdev=20292.14 00:27:20.671 lat (usec): min=591, max=43351, avg=21264.70, stdev=20292.09 00:27:20.671 clat percentiles (usec): 00:27:20.671 | 1.00th=[ 717], 5.00th=[ 881], 10.00th=[ 914], 20.00th=[ 930], 00:27:20.671 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[41157], 60.00th=[41157], 00:27:20.671 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42730], 00:27:20.671 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:27:20.671 | 99.99th=[43254] 00:27:20.671 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=752.00, stdev=28.43, samples=20 00:27:20.672 iops : min= 176, max= 192, avg=188.00, stdev= 7.11, samples=20 00:27:20.672 lat (usec) : 750=1.91%, 1000=42.89% 00:27:20.672 lat (msec) : 2=5.10%, 50=50.11% 00:27:20.672 cpu : usr=94.84%, sys=4.91%, ctx=14, majf=0, minf=209 00:27:20.672 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:20.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:20.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:27:20.672 issued rwts: total=1884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:20.672 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:20.672 00:27:20.672 Run status group 0 (all jobs): 00:27:20.672 READ: bw=752KiB/s (770kB/s), 752KiB/s-752KiB/s (770kB/s-770kB/s), io=7536KiB (7717kB), run=10021-10021msec 00:27:20.672 20:58:43 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:20.672 20:58:43 -- target/dif.sh@43 -- # local sub 00:27:20.672 20:58:43 -- target/dif.sh@45 -- # for sub in "$@" 00:27:20.672 20:58:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:20.672 20:58:43 -- target/dif.sh@36 -- # local sub_id=0 00:27:20.672 20:58:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 00:27:20.672 real 0m11.114s 00:27:20.672 user 0m17.825s 00:27:20.672 sys 0m0.898s 00:27:20.672 20:58:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 ************************************ 00:27:20.672 END TEST fio_dif_1_default 00:27:20.672 ************************************ 00:27:20.672 20:58:43 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:20.672 20:58:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:20.672 20:58:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 ************************************ 00:27:20.672 START TEST fio_dif_1_multi_subsystems 00:27:20.672 ************************************ 00:27:20.672 20:58:43 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:20.672 20:58:43 -- target/dif.sh@92 -- # local files=1 00:27:20.672 20:58:43 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:20.672 20:58:43 -- target/dif.sh@28 -- # local sub 00:27:20.672 20:58:43 -- target/dif.sh@30 -- # for sub in "$@" 00:27:20.672 20:58:43 -- target/dif.sh@31 -- # create_subsystem 0 00:27:20.672 20:58:43 -- target/dif.sh@18 -- # local sub_id=0 00:27:20.672 20:58:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 bdev_null0 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 
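[Annotation, not part of the captured log] The xtrace records around this point show the per-subsystem target setup that target/dif.sh performs for each null-backed NVMe/TCP subsystem: create a null bdev with metadata and a DIF type, create the subsystem, attach the bdev as a namespace, and add a TCP listener. A minimal sketch of the same four steps as direct scripts/rpc.py calls, assuming rpc_cmd forwards its arguments to that CLI against the running nvmf_tgt (the wrapper and its RPC socket path are not shown in this excerpt):

# Editor's sketch, mirroring the rpc_cmd calls recorded in the surrounding xtrace.
rpc=scripts/rpc.py   # hypothetical path to the RPC client in the SPDK checkout
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1    # 64 MB null bdev, 512 B blocks + 16 B metadata, DIF type 1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0     # expose the bdev as a namespace of cnode0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The multi-subsystem test repeats the same sequence for cnode1/bdev_null1, as the log continues below.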
00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 [2024-04-24 20:58:43.657921] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@30 -- # for sub in "$@" 00:27:20.672 20:58:43 -- target/dif.sh@31 -- # create_subsystem 1 00:27:20.672 20:58:43 -- target/dif.sh@18 -- # local sub_id=1 00:27:20.672 20:58:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 bdev_null1 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:20.672 20:58:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:20.672 20:58:43 -- common/autotest_common.sh@10 -- # set +x 00:27:20.672 20:58:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:20.672 20:58:43 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:20.672 20:58:43 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:20.672 20:58:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:20.672 20:58:43 -- nvmf/common.sh@521 -- # config=() 00:27:20.672 20:58:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.672 20:58:43 -- nvmf/common.sh@521 -- # local subsystem config 00:27:20.672 20:58:43 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.672 20:58:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:20.672 20:58:43 -- target/dif.sh@82 -- # gen_fio_conf 00:27:20.672 20:58:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:20.672 { 00:27:20.672 "params": { 00:27:20.672 "name": "Nvme$subsystem", 00:27:20.672 "trtype": "$TEST_TRANSPORT", 00:27:20.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.672 "adrfam": "ipv4", 00:27:20.672 "trsvcid": "$NVMF_PORT", 00:27:20.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.672 "hdgst": ${hdgst:-false}, 00:27:20.672 "ddgst": ${ddgst:-false} 00:27:20.672 }, 00:27:20.672 "method": "bdev_nvme_attach_controller" 00:27:20.672 } 00:27:20.672 EOF 
00:27:20.672 )") 00:27:20.672 20:58:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:20.672 20:58:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:20.672 20:58:43 -- target/dif.sh@54 -- # local file 00:27:20.672 20:58:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:20.672 20:58:43 -- target/dif.sh@56 -- # cat 00:27:20.672 20:58:43 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:20.672 20:58:43 -- common/autotest_common.sh@1327 -- # shift 00:27:20.672 20:58:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:20.672 20:58:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.672 20:58:43 -- nvmf/common.sh@543 -- # cat 00:27:20.672 20:58:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:20.672 20:58:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:20.672 20:58:43 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:20.672 20:58:43 -- target/dif.sh@72 -- # (( file <= files )) 00:27:20.672 20:58:43 -- target/dif.sh@73 -- # cat 00:27:20.672 20:58:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:20.672 20:58:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:20.672 20:58:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:20.672 { 00:27:20.672 "params": { 00:27:20.672 "name": "Nvme$subsystem", 00:27:20.672 "trtype": "$TEST_TRANSPORT", 00:27:20.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:20.672 "adrfam": "ipv4", 00:27:20.672 "trsvcid": "$NVMF_PORT", 00:27:20.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:20.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:20.672 "hdgst": ${hdgst:-false}, 00:27:20.672 "ddgst": ${ddgst:-false} 00:27:20.672 }, 00:27:20.672 "method": "bdev_nvme_attach_controller" 00:27:20.672 } 00:27:20.672 EOF 00:27:20.672 )") 00:27:20.672 20:58:43 -- target/dif.sh@72 -- # (( file++ )) 00:27:20.672 20:58:43 -- target/dif.sh@72 -- # (( file <= files )) 00:27:20.672 20:58:43 -- nvmf/common.sh@543 -- # cat 00:27:20.672 20:58:43 -- nvmf/common.sh@545 -- # jq . 
00:27:20.672 20:58:43 -- nvmf/common.sh@546 -- # IFS=, 00:27:20.672 20:58:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:20.672 "params": { 00:27:20.672 "name": "Nvme0", 00:27:20.672 "trtype": "tcp", 00:27:20.672 "traddr": "10.0.0.2", 00:27:20.672 "adrfam": "ipv4", 00:27:20.672 "trsvcid": "4420", 00:27:20.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:20.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:20.672 "hdgst": false, 00:27:20.672 "ddgst": false 00:27:20.672 }, 00:27:20.672 "method": "bdev_nvme_attach_controller" 00:27:20.672 },{ 00:27:20.672 "params": { 00:27:20.672 "name": "Nvme1", 00:27:20.672 "trtype": "tcp", 00:27:20.672 "traddr": "10.0.0.2", 00:27:20.672 "adrfam": "ipv4", 00:27:20.672 "trsvcid": "4420", 00:27:20.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:20.672 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:20.672 "hdgst": false, 00:27:20.672 "ddgst": false 00:27:20.672 }, 00:27:20.672 "method": "bdev_nvme_attach_controller" 00:27:20.672 }' 00:27:20.672 20:58:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:20.672 20:58:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:20.672 20:58:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:20.673 20:58:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:20.673 20:58:43 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:20.673 20:58:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:20.673 20:58:43 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:20.673 20:58:43 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:20.673 20:58:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:20.673 20:58:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:20.673 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:20.673 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:20.673 fio-3.35 00:27:20.673 Starting 2 threads 00:27:20.673 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.668 00:27:30.668 filename0: (groupid=0, jobs=1): err= 0: pid=2954236: Wed Apr 24 20:58:54 2024 00:27:30.668 read: IOPS=186, BW=747KiB/s (765kB/s)(7488KiB/10028msec) 00:27:30.668 slat (nsec): min=7750, max=31421, avg=8014.32, stdev=864.06 00:27:30.668 clat (usec): min=742, max=43767, avg=21405.11, stdev=20382.72 00:27:30.668 lat (usec): min=750, max=43799, avg=21413.13, stdev=20382.69 00:27:30.668 clat percentiles (usec): 00:27:30.668 | 1.00th=[ 865], 5.00th=[ 906], 10.00th=[ 922], 20.00th=[ 938], 00:27:30.668 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[41157], 60.00th=[41157], 00:27:30.668 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:27:30.668 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779], 00:27:30.668 | 99.99th=[43779] 00:27:30.668 bw ( KiB/s): min= 672, max= 768, per=66.12%, avg=747.20, stdev=31.62, samples=20 00:27:30.669 iops : min= 168, max= 192, avg=186.80, stdev= 7.90, samples=20 00:27:30.669 lat (usec) : 750=0.11%, 1000=47.70% 00:27:30.669 lat (msec) : 2=1.98%, 50=50.21% 00:27:30.669 cpu : usr=96.02%, sys=3.76%, ctx=9, majf=0, minf=138 00:27:30.669 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:30.669 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.669 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.669 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:30.669 filename1: (groupid=0, jobs=1): err= 0: pid=2954237: Wed Apr 24 20:58:54 2024 00:27:30.669 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10041msec) 00:27:30.669 slat (nsec): min=7746, max=32215, avg=8038.66, stdev=1073.76 00:27:30.669 clat (usec): min=40807, max=42986, avg=41641.01, stdev=509.95 00:27:30.669 lat (usec): min=40815, max=42996, avg=41649.05, stdev=510.05 00:27:30.669 clat percentiles (usec): 00:27:30.669 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:27:30.669 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:27:30.669 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:27:30.669 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:27:30.669 | 99.99th=[42730] 00:27:30.669 bw ( KiB/s): min= 352, max= 416, per=33.99%, avg=384.00, stdev=14.68, samples=20 00:27:30.669 iops : min= 88, max= 104, avg=96.00, stdev= 3.67, samples=20 00:27:30.669 lat (msec) : 50=100.00% 00:27:30.669 cpu : usr=96.32%, sys=3.46%, ctx=13, majf=0, minf=184 00:27:30.669 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:30.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.669 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.669 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:30.669 00:27:30.669 Run status group 0 (all jobs): 00:27:30.669 READ: bw=1130KiB/s (1157kB/s), 384KiB/s-747KiB/s (393kB/s-765kB/s), io=11.1MiB (11.6MB), run=10028-10041msec 00:27:30.669 20:58:54 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:30.669 20:58:54 -- target/dif.sh@43 -- # local sub 00:27:30.669 20:58:54 -- target/dif.sh@45 -- # for sub in "$@" 00:27:30.669 20:58:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:30.669 20:58:54 -- target/dif.sh@36 -- # local sub_id=0 00:27:30.669 20:58:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:30.669 20:58:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:54 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 20:58:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 20:58:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:30.669 20:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 20:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 20:58:55 -- target/dif.sh@45 -- # for sub in "$@" 00:27:30.669 20:58:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:30.669 20:58:55 -- target/dif.sh@36 -- # local sub_id=1 00:27:30.669 20:58:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.669 20:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 20:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 20:58:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:30.669 20:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:55 
-- common/autotest_common.sh@10 -- # set +x 00:27:30.669 20:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 00:27:30.669 real 0m11.413s 00:27:30.669 user 0m31.735s 00:27:30.669 sys 0m1.034s 00:27:30.669 20:58:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 ************************************ 00:27:30.669 END TEST fio_dif_1_multi_subsystems 00:27:30.669 ************************************ 00:27:30.669 20:58:55 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:30.669 20:58:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:30.669 20:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 ************************************ 00:27:30.669 START TEST fio_dif_rand_params 00:27:30.669 ************************************ 00:27:30.669 20:58:55 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:30.669 20:58:55 -- target/dif.sh@100 -- # local NULL_DIF 00:27:30.669 20:58:55 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:30.669 20:58:55 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:30.669 20:58:55 -- target/dif.sh@103 -- # bs=128k 00:27:30.669 20:58:55 -- target/dif.sh@103 -- # numjobs=3 00:27:30.669 20:58:55 -- target/dif.sh@103 -- # iodepth=3 00:27:30.669 20:58:55 -- target/dif.sh@103 -- # runtime=5 00:27:30.669 20:58:55 -- target/dif.sh@105 -- # create_subsystems 0 00:27:30.669 20:58:55 -- target/dif.sh@28 -- # local sub 00:27:30.669 20:58:55 -- target/dif.sh@30 -- # for sub in "$@" 00:27:30.669 20:58:55 -- target/dif.sh@31 -- # create_subsystem 0 00:27:30.669 20:58:55 -- target/dif.sh@18 -- # local sub_id=0 00:27:30.669 20:58:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:30.669 20:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 bdev_null0 00:27:30.669 20:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 20:58:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:30.669 20:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 20:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 20:58:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:30.669 20:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 20:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 20:58:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:30.669 20:58:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:30.669 20:58:55 -- common/autotest_common.sh@10 -- # set +x 00:27:30.669 [2024-04-24 20:58:55.268571] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.669 20:58:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:30.669 20:58:55 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:30.669 20:58:55 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:30.669 20:58:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:30.669 
20:58:55 -- nvmf/common.sh@521 -- # config=() 00:27:30.669 20:58:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.669 20:58:55 -- nvmf/common.sh@521 -- # local subsystem config 00:27:30.669 20:58:55 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:30.669 20:58:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:30.669 20:58:55 -- target/dif.sh@82 -- # gen_fio_conf 00:27:30.669 20:58:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:30.669 { 00:27:30.669 "params": { 00:27:30.669 "name": "Nvme$subsystem", 00:27:30.669 "trtype": "$TEST_TRANSPORT", 00:27:30.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:30.669 "adrfam": "ipv4", 00:27:30.669 "trsvcid": "$NVMF_PORT", 00:27:30.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:30.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:30.669 "hdgst": ${hdgst:-false}, 00:27:30.669 "ddgst": ${ddgst:-false} 00:27:30.669 }, 00:27:30.669 "method": "bdev_nvme_attach_controller" 00:27:30.669 } 00:27:30.669 EOF 00:27:30.669 )") 00:27:30.669 20:58:55 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:30.669 20:58:55 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.669 20:58:55 -- target/dif.sh@54 -- # local file 00:27:30.669 20:58:55 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:30.669 20:58:55 -- target/dif.sh@56 -- # cat 00:27:30.669 20:58:55 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:30.669 20:58:55 -- common/autotest_common.sh@1327 -- # shift 00:27:30.669 20:58:55 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:30.669 20:58:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.669 20:58:55 -- nvmf/common.sh@543 -- # cat 00:27:30.669 20:58:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:30.669 20:58:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:30.669 20:58:55 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:30.669 20:58:55 -- target/dif.sh@72 -- # (( file <= files )) 00:27:30.669 20:58:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:30.669 20:58:55 -- nvmf/common.sh@545 -- # jq . 
00:27:30.669 20:58:55 -- nvmf/common.sh@546 -- # IFS=, 00:27:30.669 20:58:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:30.669 "params": { 00:27:30.669 "name": "Nvme0", 00:27:30.669 "trtype": "tcp", 00:27:30.669 "traddr": "10.0.0.2", 00:27:30.669 "adrfam": "ipv4", 00:27:30.669 "trsvcid": "4420", 00:27:30.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:30.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:30.669 "hdgst": false, 00:27:30.669 "ddgst": false 00:27:30.669 }, 00:27:30.669 "method": "bdev_nvme_attach_controller" 00:27:30.669 }' 00:27:30.951 20:58:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:30.951 20:58:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:30.951 20:58:55 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.951 20:58:55 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:30.951 20:58:55 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:30.951 20:58:55 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:30.951 20:58:55 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:30.951 20:58:55 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:30.951 20:58:55 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:30.951 20:58:55 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.214 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:31.214 ... 00:27:31.214 fio-3.35 00:27:31.214 Starting 3 threads 00:27:31.214 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.799 00:27:37.799 filename0: (groupid=0, jobs=1): err= 0: pid=2956732: Wed Apr 24 20:59:01 2024 00:27:37.799 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(138MiB/5044msec) 00:27:37.799 slat (nsec): min=7743, max=29308, avg=8718.63, stdev=1066.98 00:27:37.799 clat (usec): min=5488, max=53272, avg=13645.17, stdev=9355.14 00:27:37.799 lat (usec): min=5497, max=53280, avg=13653.89, stdev=9355.19 00:27:37.799 clat percentiles (usec): 00:27:37.799 | 1.00th=[ 5932], 5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8848], 00:27:37.799 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11863], 60.00th=[12518], 00:27:37.799 | 70.00th=[13435], 80.00th=[14484], 90.00th=[15664], 95.00th=[47449], 00:27:37.799 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:27:37.799 | 99.99th=[53216] 00:27:37.799 bw ( KiB/s): min=20224, max=33536, per=32.58%, avg=28236.80, stdev=4163.09, samples=10 00:27:37.800 iops : min= 158, max= 262, avg=220.60, stdev=32.52, samples=10 00:27:37.800 lat (msec) : 10=26.79%, 20=67.60%, 50=2.35%, 100=3.26% 00:27:37.800 cpu : usr=95.16%, sys=4.24%, ctx=285, majf=0, minf=76 00:27:37.800 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.800 issued rwts: total=1105,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.800 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:37.800 filename0: (groupid=0, jobs=1): err= 0: pid=2956733: Wed Apr 24 20:59:01 2024 00:27:37.800 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(137MiB/5044msec) 00:27:37.800 slat (nsec): min=7770, max=50159, avg=8528.96, stdev=1590.41 00:27:37.800 clat (usec): 
min=5523, max=88779, avg=13781.68, stdev=8311.15 00:27:37.800 lat (usec): min=5531, max=88788, avg=13790.21, stdev=8311.25 00:27:37.800 clat percentiles (usec): 00:27:37.800 | 1.00th=[ 5997], 5.00th=[ 7242], 10.00th=[ 8356], 20.00th=[ 9634], 00:27:37.800 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12780], 60.00th=[13698], 00:27:37.800 | 70.00th=[14484], 80.00th=[15270], 90.00th=[16188], 95.00th=[17171], 00:27:37.800 | 99.00th=[50594], 99.50th=[53216], 99.90th=[87557], 99.95th=[88605], 00:27:37.800 | 99.99th=[88605] 00:27:37.800 bw ( KiB/s): min=23040, max=34560, per=32.26%, avg=27955.20, stdev=3956.38, samples=10 00:27:37.800 iops : min= 180, max= 270, avg=218.40, stdev=30.91, samples=10 00:27:37.800 lat (msec) : 10=22.76%, 20=73.49%, 50=2.47%, 100=1.28% 00:27:37.800 cpu : usr=95.80%, sys=3.95%, ctx=10, majf=0, minf=158 00:27:37.800 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.800 issued rwts: total=1094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.800 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:37.800 filename0: (groupid=0, jobs=1): err= 0: pid=2956734: Wed Apr 24 20:59:01 2024 00:27:37.800 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5004msec) 00:27:37.800 slat (nsec): min=7778, max=29835, avg=8742.76, stdev=1052.90 00:27:37.800 clat (usec): min=5249, max=93258, avg=12330.59, stdev=11068.39 00:27:37.800 lat (usec): min=5258, max=93266, avg=12339.34, stdev=11068.49 00:27:37.800 clat percentiles (usec): 00:27:37.800 | 1.00th=[ 5866], 5.00th=[ 6718], 10.00th=[ 7242], 20.00th=[ 8225], 00:27:37.800 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:27:37.800 | 70.00th=[10159], 80.00th=[10552], 90.00th=[11731], 95.00th=[49021], 00:27:37.800 | 99.00th=[52167], 99.50th=[52167], 99.90th=[53740], 99.95th=[92799], 00:27:37.800 | 99.99th=[92799] 00:27:37.800 bw ( KiB/s): min=21248, max=41472, per=35.84%, avg=31059.20, stdev=7550.92, samples=10 00:27:37.800 iops : min= 166, max= 324, avg=242.60, stdev=58.98, samples=10 00:27:37.800 lat (msec) : 10=66.04%, 20=26.40%, 50=3.37%, 100=4.19% 00:27:37.800 cpu : usr=90.37%, sys=5.92%, ctx=308, majf=0, minf=68 00:27:37.800 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:37.800 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.800 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.800 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.800 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:37.800 00:27:37.800 Run status group 0 (all jobs): 00:27:37.800 READ: bw=84.6MiB/s (88.7MB/s), 27.1MiB/s-30.4MiB/s (28.4MB/s-31.9MB/s), io=427MiB (448MB), run=5004-5044msec 00:27:37.800 20:59:01 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:37.800 20:59:01 -- target/dif.sh@43 -- # local sub 00:27:37.800 20:59:01 -- target/dif.sh@45 -- # for sub in "$@" 00:27:37.800 20:59:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:37.800 20:59:01 -- target/dif.sh@36 -- # local sub_id=0 00:27:37.800 20:59:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
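[Annotation, not part of the captured log] destroy_subsystems is tearing the target back down at this point: the nvmf_delete_subsystem call above has already returned, and the matching bdev_null_delete follows immediately below, i.e. the subsystem (with its listener and namespace) is removed before its backing bdev is released. The same two steps as direct scripts/rpc.py calls, under the same assumption as the setup sketch earlier that rpc_cmd forwards to that CLI:

# Editor's sketch of the teardown order recorded in the surrounding xtrace.
rpc=scripts/rpc.py                                      # hypothetical path
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0   # remove the subsystem, its listener and namespace first
$rpc bdev_null_delete bdev_null0                        # then release the backing null bdev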
00:27:37.800 20:59:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:37.800 20:59:01 -- target/dif.sh@109 -- # bs=4k 00:27:37.800 20:59:01 -- target/dif.sh@109 -- # numjobs=8 00:27:37.800 20:59:01 -- target/dif.sh@109 -- # iodepth=16 00:27:37.800 20:59:01 -- target/dif.sh@109 -- # runtime= 00:27:37.800 20:59:01 -- target/dif.sh@109 -- # files=2 00:27:37.800 20:59:01 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:37.800 20:59:01 -- target/dif.sh@28 -- # local sub 00:27:37.800 20:59:01 -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.800 20:59:01 -- target/dif.sh@31 -- # create_subsystem 0 00:27:37.800 20:59:01 -- target/dif.sh@18 -- # local sub_id=0 00:27:37.800 20:59:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 bdev_null0 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 [2024-04-24 20:59:01.394435] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.800 20:59:01 -- target/dif.sh@31 -- # create_subsystem 1 00:27:37.800 20:59:01 -- target/dif.sh@18 -- # local sub_id=1 00:27:37.800 20:59:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 bdev_null1 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:37.800 20:59:01 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@30 -- # for sub in "$@" 00:27:37.800 20:59:01 -- target/dif.sh@31 -- # create_subsystem 2 00:27:37.800 20:59:01 -- target/dif.sh@18 -- # local sub_id=2 00:27:37.800 20:59:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 bdev_null2 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:37.800 20:59:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.800 20:59:01 -- common/autotest_common.sh@10 -- # set +x 00:27:37.800 20:59:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.800 20:59:01 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:37.800 20:59:01 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:37.800 20:59:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:37.800 20:59:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.800 20:59:01 -- nvmf/common.sh@521 -- # config=() 00:27:37.800 20:59:01 -- nvmf/common.sh@521 -- # local subsystem config 00:27:37.800 20:59:01 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.800 20:59:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:37.800 20:59:01 -- target/dif.sh@82 -- # gen_fio_conf 00:27:37.800 20:59:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:37.800 20:59:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:37.800 { 00:27:37.800 "params": { 00:27:37.800 "name": "Nvme$subsystem", 00:27:37.800 "trtype": "$TEST_TRANSPORT", 00:27:37.800 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.800 "adrfam": "ipv4", 00:27:37.800 "trsvcid": "$NVMF_PORT", 00:27:37.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.800 "hdgst": ${hdgst:-false}, 00:27:37.800 "ddgst": ${ddgst:-false} 00:27:37.800 }, 00:27:37.800 
"method": "bdev_nvme_attach_controller" 00:27:37.800 } 00:27:37.801 EOF 00:27:37.801 )") 00:27:37.801 20:59:01 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.801 20:59:01 -- target/dif.sh@54 -- # local file 00:27:37.801 20:59:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:37.801 20:59:01 -- target/dif.sh@56 -- # cat 00:27:37.801 20:59:01 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:37.801 20:59:01 -- common/autotest_common.sh@1327 -- # shift 00:27:37.801 20:59:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:37.801 20:59:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.801 20:59:01 -- nvmf/common.sh@543 -- # cat 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:37.801 20:59:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:37.801 20:59:01 -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:37.801 20:59:01 -- target/dif.sh@73 -- # cat 00:27:37.801 20:59:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:37.801 20:59:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:37.801 { 00:27:37.801 "params": { 00:27:37.801 "name": "Nvme$subsystem", 00:27:37.801 "trtype": "$TEST_TRANSPORT", 00:27:37.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.801 "adrfam": "ipv4", 00:27:37.801 "trsvcid": "$NVMF_PORT", 00:27:37.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.801 "hdgst": ${hdgst:-false}, 00:27:37.801 "ddgst": ${ddgst:-false} 00:27:37.801 }, 00:27:37.801 "method": "bdev_nvme_attach_controller" 00:27:37.801 } 00:27:37.801 EOF 00:27:37.801 )") 00:27:37.801 20:59:01 -- target/dif.sh@72 -- # (( file++ )) 00:27:37.801 20:59:01 -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.801 20:59:01 -- target/dif.sh@73 -- # cat 00:27:37.801 20:59:01 -- nvmf/common.sh@543 -- # cat 00:27:37.801 20:59:01 -- target/dif.sh@72 -- # (( file++ )) 00:27:37.801 20:59:01 -- target/dif.sh@72 -- # (( file <= files )) 00:27:37.801 20:59:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:37.801 20:59:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:37.801 { 00:27:37.801 "params": { 00:27:37.801 "name": "Nvme$subsystem", 00:27:37.801 "trtype": "$TEST_TRANSPORT", 00:27:37.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.801 "adrfam": "ipv4", 00:27:37.801 "trsvcid": "$NVMF_PORT", 00:27:37.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.801 "hdgst": ${hdgst:-false}, 00:27:37.801 "ddgst": ${ddgst:-false} 00:27:37.801 }, 00:27:37.801 "method": "bdev_nvme_attach_controller" 00:27:37.801 } 00:27:37.801 EOF 00:27:37.801 )") 00:27:37.801 20:59:01 -- nvmf/common.sh@543 -- # cat 00:27:37.801 20:59:01 -- nvmf/common.sh@545 -- # jq . 
00:27:37.801 20:59:01 -- nvmf/common.sh@546 -- # IFS=, 00:27:37.801 20:59:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:37.801 "params": { 00:27:37.801 "name": "Nvme0", 00:27:37.801 "trtype": "tcp", 00:27:37.801 "traddr": "10.0.0.2", 00:27:37.801 "adrfam": "ipv4", 00:27:37.801 "trsvcid": "4420", 00:27:37.801 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:37.801 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:37.801 "hdgst": false, 00:27:37.801 "ddgst": false 00:27:37.801 }, 00:27:37.801 "method": "bdev_nvme_attach_controller" 00:27:37.801 },{ 00:27:37.801 "params": { 00:27:37.801 "name": "Nvme1", 00:27:37.801 "trtype": "tcp", 00:27:37.801 "traddr": "10.0.0.2", 00:27:37.801 "adrfam": "ipv4", 00:27:37.801 "trsvcid": "4420", 00:27:37.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.801 "hdgst": false, 00:27:37.801 "ddgst": false 00:27:37.801 }, 00:27:37.801 "method": "bdev_nvme_attach_controller" 00:27:37.801 },{ 00:27:37.801 "params": { 00:27:37.801 "name": "Nvme2", 00:27:37.801 "trtype": "tcp", 00:27:37.801 "traddr": "10.0.0.2", 00:27:37.801 "adrfam": "ipv4", 00:27:37.801 "trsvcid": "4420", 00:27:37.801 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:37.801 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:37.801 "hdgst": false, 00:27:37.801 "ddgst": false 00:27:37.801 }, 00:27:37.801 "method": "bdev_nvme_attach_controller" 00:27:37.801 }' 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:37.801 20:59:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:37.801 20:59:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:37.801 20:59:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:37.801 20:59:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:37.801 20:59:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:37.801 20:59:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:37.801 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:37.801 ... 00:27:37.801 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:37.801 ... 00:27:37.801 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:37.801 ... 
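[Annotation, not part of the captured log] The command recorded just above is how every fio stage in this log runs: the SPDK fio bdev plugin is LD_PRELOADed into the stock fio binary, the bdev_nvme_attach_controller JSON goes in through --spdk_json_conf, and the generated job file arrives as the last positional argument (both passed via /dev/fd here). A standalone sketch with ordinary files in place of the file descriptors; bdev.json and dif.fio are hypothetical names, while the plugin path and fio binary are the ones shown in the log:

# Editor's sketch of the recorded invocation, with regular files substituted for /dev/fd/62 and /dev/fd/61.
# bdev.json: a bdev JSON config built from the three Nvme0..Nvme2 attach entries printed above
# dif.fio:   the job file gen_fio_conf writes (randread, 4 KiB blocks, iodepth=16, jobs filename0..filename2)
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio

With numjobs=8 set by this test stage, the three job sections expand to the 24 threads fio reports as it starts below.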
00:27:37.801 fio-3.35 00:27:37.801 Starting 24 threads 00:27:37.801 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.040 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958095: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=496, BW=1985KiB/s (2033kB/s)(19.4MiB/10025msec) 00:27:50.040 slat (nsec): min=7796, max=86173, avg=17717.36, stdev=14555.82 00:27:50.040 clat (usec): min=2605, max=42679, avg=32093.21, stdev=3533.84 00:27:50.040 lat (usec): min=2624, max=42689, avg=32110.93, stdev=3534.09 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[ 5276], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:27:50.040 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33162], 95.00th=[33424], 00:27:50.040 | 99.00th=[34341], 99.50th=[40633], 99.90th=[42206], 99.95th=[42730], 00:27:50.040 | 99.99th=[42730] 00:27:50.040 bw ( KiB/s): min= 1920, max= 2436, per=4.21%, avg=1984.20, stdev=121.86, samples=20 00:27:50.040 iops : min= 480, max= 609, avg=496.05, stdev=30.46, samples=20 00:27:50.040 lat (msec) : 4=0.18%, 10=1.39%, 20=0.04%, 50=98.39% 00:27:50.040 cpu : usr=99.08%, sys=0.63%, ctx=30, majf=0, minf=73 00:27:50.040 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958096: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10022msec) 00:27:50.040 slat (nsec): min=7832, max=99892, avg=32529.90, stdev=16664.81 00:27:50.040 clat (usec): min=20732, max=59411, avg=32321.07, stdev=2110.25 00:27:50.040 lat (usec): min=20744, max=59437, avg=32353.60, stdev=2109.88 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[25035], 5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:27:50.040 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:27:50.040 | 99.00th=[33817], 99.50th=[47973], 99.90th=[59507], 99.95th=[59507], 00:27:50.040 | 99.99th=[59507] 00:27:50.040 bw ( KiB/s): min= 1840, max= 2048, per=4.16%, avg=1962.95, stdev=69.14, samples=19 00:27:50.040 iops : min= 460, max= 512, avg=490.74, stdev=17.28, samples=19 00:27:50.040 lat (msec) : 50=99.51%, 100=0.49% 00:27:50.040 cpu : usr=98.99%, sys=0.70%, ctx=70, majf=0, minf=45 00:27:50.040 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958097: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=494, BW=1978KiB/s (2025kB/s)(19.4MiB/10024msec) 00:27:50.040 slat (usec): min=7, max=100, avg=29.15, stdev=17.94 00:27:50.040 clat (usec): min=20615, max=55430, avg=32127.13, stdev=2059.96 00:27:50.040 lat (usec): min=20624, max=55456, avg=32156.27, stdev=2061.48 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[22152], 
5.00th=[31589], 10.00th=[31851], 20.00th=[32113], 00:27:50.040 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.040 | 99.00th=[34341], 99.50th=[43779], 99.90th=[48497], 99.95th=[48497], 00:27:50.040 | 99.99th=[55313] 00:27:50.040 bw ( KiB/s): min= 1920, max= 2144, per=4.19%, avg=1978.95, stdev=74.10, samples=19 00:27:50.040 iops : min= 480, max= 536, avg=494.74, stdev=18.53, samples=19 00:27:50.040 lat (msec) : 50=99.96%, 100=0.04% 00:27:50.040 cpu : usr=98.68%, sys=0.96%, ctx=56, majf=0, minf=45 00:27:50.040 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958098: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10006msec) 00:27:50.040 slat (nsec): min=6274, max=96069, avg=32637.82, stdev=16226.96 00:27:50.040 clat (usec): min=10952, max=52541, avg=32278.32, stdev=1768.24 00:27:50.040 lat (usec): min=10973, max=52558, avg=32310.95, stdev=1768.15 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:27:50.040 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:27:50.040 | 99.00th=[33817], 99.50th=[34341], 99.90th=[52691], 99.95th=[52691], 00:27:50.040 | 99.99th=[52691] 00:27:50.040 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1960.42, stdev=74.55, samples=19 00:27:50.040 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:27:50.040 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:27:50.040 cpu : usr=99.09%, sys=0.61%, ctx=10, majf=0, minf=54 00:27:50.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958099: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=500, BW=2002KiB/s (2050kB/s)(19.6MiB/10004msec) 00:27:50.040 slat (nsec): min=2943, max=34784, avg=7796.85, stdev=3191.62 00:27:50.040 clat (usec): min=2375, max=41344, avg=31888.58, stdev=4079.88 00:27:50.040 lat (usec): min=2382, max=41374, avg=31896.37, stdev=4080.31 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[ 4359], 5.00th=[31851], 10.00th=[32113], 20.00th=[32375], 00:27:50.040 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.040 | 99.00th=[33817], 99.50th=[34341], 99.90th=[41157], 99.95th=[41157], 00:27:50.040 | 99.99th=[41157] 00:27:50.040 bw ( KiB/s): min= 1920, max= 2688, per=4.24%, avg=2000.84, stdev=177.01, samples=19 00:27:50.040 iops : min= 480, max= 672, avg=500.21, stdev=44.25, samples=19 00:27:50.040 lat (msec) : 4=0.82%, 10=1.10%, 20=0.64%, 50=97.44% 00:27:50.040 cpu : 
usr=98.32%, sys=1.02%, ctx=188, majf=0, minf=93 00:27:50.040 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958100: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:27:50.040 slat (nsec): min=7787, max=97321, avg=18631.07, stdev=15745.57 00:27:50.040 clat (usec): min=24591, max=63030, avg=32555.74, stdev=1838.39 00:27:50.040 lat (usec): min=24615, max=63052, avg=32574.37, stdev=1836.71 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[31589], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:27:50.040 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.040 | 99.00th=[34341], 99.50th=[34341], 99.90th=[63177], 99.95th=[63177], 00:27:50.040 | 99.99th=[63177] 00:27:50.040 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1953.42, stdev=71.57, samples=19 00:27:50.040 iops : min= 448, max= 512, avg=488.32, stdev=17.84, samples=19 00:27:50.040 lat (msec) : 50=99.67%, 100=0.33% 00:27:50.040 cpu : usr=99.20%, sys=0.52%, ctx=12, majf=0, minf=63 00:27:50.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958101: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10009msec) 00:27:50.040 slat (nsec): min=6684, max=59210, avg=9986.20, stdev=5173.13 00:27:50.040 clat (usec): min=19365, max=45718, avg=32518.19, stdev=1072.31 00:27:50.040 lat (usec): min=19385, max=45737, avg=32528.18, stdev=1072.32 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:27:50.040 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:27:50.040 | 99.00th=[33817], 99.50th=[34866], 99.90th=[43779], 99.95th=[43779], 00:27:50.040 | 99.99th=[45876] 00:27:50.040 bw ( KiB/s): min= 1920, max= 2048, per=4.15%, avg=1960.16, stdev=60.74, samples=19 00:27:50.040 iops : min= 480, max= 512, avg=490.00, stdev=15.13, samples=19 00:27:50.040 lat (msec) : 20=0.29%, 50=99.71% 00:27:50.040 cpu : usr=99.14%, sys=0.58%, ctx=9, majf=0, minf=79 00:27:50.040 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename0: (groupid=0, jobs=1): err= 0: pid=2958102: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=491, BW=1965KiB/s 
(2013kB/s)(19.2MiB/10001msec) 00:27:50.040 slat (nsec): min=6012, max=85752, avg=19018.56, stdev=12960.72 00:27:50.040 clat (usec): min=14624, max=82669, avg=32382.13, stdev=2923.17 00:27:50.040 lat (usec): min=14632, max=82685, avg=32401.14, stdev=2923.53 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[21627], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:27:50.040 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.040 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:27:50.040 | 99.00th=[41157], 99.50th=[49546], 99.90th=[67634], 99.95th=[67634], 00:27:50.040 | 99.99th=[82314] 00:27:50.040 bw ( KiB/s): min= 1760, max= 2096, per=4.16%, avg=1961.05, stdev=82.62, samples=19 00:27:50.040 iops : min= 440, max= 524, avg=490.26, stdev=20.66, samples=19 00:27:50.040 lat (msec) : 20=0.49%, 50=99.02%, 100=0.49% 00:27:50.040 cpu : usr=98.49%, sys=0.96%, ctx=92, majf=0, minf=61 00:27:50.040 IO depths : 1=5.6%, 2=11.4%, 4=23.4%, 8=52.4%, 16=7.1%, 32=0.0%, >=64=0.0% 00:27:50.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.040 issued rwts: total=4914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.040 filename1: (groupid=0, jobs=1): err= 0: pid=2958103: Wed Apr 24 20:59:12 2024 00:27:50.040 read: IOPS=490, BW=1964KiB/s (2011kB/s)(19.2MiB/10006msec) 00:27:50.040 slat (nsec): min=7019, max=87920, avg=32140.85, stdev=15398.06 00:27:50.040 clat (usec): min=11338, max=51777, avg=32299.98, stdev=1753.98 00:27:50.040 lat (usec): min=11356, max=51794, avg=32332.12, stdev=1753.73 00:27:50.040 clat percentiles (usec): 00:27:50.040 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:27:50.041 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.041 | 99.00th=[33817], 99.50th=[34341], 99.90th=[51643], 99.95th=[51643], 00:27:50.041 | 99.99th=[51643] 00:27:50.041 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1960.58, stdev=74.17, samples=19 00:27:50.041 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:27:50.041 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:27:50.041 cpu : usr=98.58%, sys=0.84%, ctx=85, majf=0, minf=44 00:27:50.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=2958104: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=490, BW=1960KiB/s (2008kB/s)(19.2MiB/10022msec) 00:27:50.041 slat (nsec): min=7941, max=75028, avg=22924.65, stdev=12525.47 00:27:50.041 clat (usec): min=22606, max=45509, avg=32430.83, stdev=1086.06 00:27:50.041 lat (usec): min=22614, max=45530, avg=32453.75, stdev=1085.81 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:27:50.041 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.041 | 99.00th=[34341], 
99.50th=[40109], 99.90th=[45351], 99.95th=[45351], 00:27:50.041 | 99.99th=[45351] 00:27:50.041 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1958.35, stdev=72.88, samples=20 00:27:50.041 iops : min= 448, max= 512, avg=489.55, stdev=18.31, samples=20 00:27:50.041 lat (msec) : 50=100.00% 00:27:50.041 cpu : usr=99.13%, sys=0.58%, ctx=14, majf=0, minf=55 00:27:50.041 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=2958105: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:27:50.041 slat (nsec): min=7812, max=56400, avg=16255.83, stdev=9359.69 00:27:50.041 clat (usec): min=29880, max=59491, avg=32559.23, stdev=1594.47 00:27:50.041 lat (usec): min=29888, max=59518, avg=32575.49, stdev=1594.08 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:27:50.041 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.041 | 99.00th=[33817], 99.50th=[34866], 99.90th=[59507], 99.95th=[59507], 00:27:50.041 | 99.99th=[59507] 00:27:50.041 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1953.68, stdev=71.93, samples=19 00:27:50.041 iops : min= 448, max= 512, avg=488.42, stdev=17.98, samples=19 00:27:50.041 lat (msec) : 50=99.67%, 100=0.33% 00:27:50.041 cpu : usr=99.22%, sys=0.51%, ctx=9, majf=0, minf=59 00:27:50.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=2958107: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=488, BW=1956KiB/s (2003kB/s)(19.1MiB/10013msec) 00:27:50.041 slat (nsec): min=7769, max=84639, avg=19293.72, stdev=13578.10 00:27:50.041 clat (usec): min=20787, max=87647, avg=32577.91, stdev=2856.41 00:27:50.041 lat (usec): min=20796, max=87680, avg=32597.21, stdev=2856.10 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[23200], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:27:50.041 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:27:50.041 | 99.00th=[42206], 99.50th=[43779], 99.90th=[70779], 99.95th=[70779], 00:27:50.041 | 99.99th=[87557] 00:27:50.041 bw ( KiB/s): min= 1664, max= 2048, per=4.14%, avg=1953.68, stdev=93.89, samples=19 00:27:50.041 iops : min= 416, max= 512, avg=488.42, stdev=23.47, samples=19 00:27:50.041 lat (msec) : 50=99.67%, 100=0.33% 00:27:50.041 cpu : usr=98.99%, sys=0.72%, ctx=20, majf=0, minf=70 00:27:50.041 IO depths : 1=5.7%, 2=11.8%, 4=24.5%, 8=51.2%, 16=6.8%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=2958108: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10010msec) 00:27:50.041 slat (nsec): min=5964, max=88793, avg=30262.75, stdev=13976.90 00:27:50.041 clat (usec): min=11255, max=56174, avg=32326.32, stdev=1899.53 00:27:50.041 lat (usec): min=11268, max=56192, avg=32356.58, stdev=1899.08 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:27:50.041 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.041 | 99.00th=[33817], 99.50th=[34341], 99.90th=[56361], 99.95th=[56361], 00:27:50.041 | 99.99th=[56361] 00:27:50.041 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1953.42, stdev=71.57, samples=19 00:27:50.041 iops : min= 448, max= 512, avg=488.32, stdev=17.84, samples=19 00:27:50.041 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:27:50.041 cpu : usr=99.08%, sys=0.62%, ctx=43, majf=0, minf=47 00:27:50.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=2958109: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10017msec) 00:27:50.041 slat (nsec): min=7814, max=69676, avg=10599.64, stdev=5660.26 00:27:50.041 clat (usec): min=3335, max=41690, avg=32117.39, stdev=3463.78 00:27:50.041 lat (usec): min=3347, max=41707, avg=32127.99, stdev=3463.68 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[ 6456], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:27:50.041 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.041 | 99.00th=[33817], 99.50th=[34866], 99.90th=[41681], 99.95th=[41681], 00:27:50.041 | 99.99th=[41681] 00:27:50.041 bw ( KiB/s): min= 1920, max= 2436, per=4.21%, avg=1984.20, stdev=121.86, samples=20 00:27:50.041 iops : min= 480, max= 609, avg=496.05, stdev=30.46, samples=20 00:27:50.041 lat (msec) : 4=0.14%, 10=1.47%, 50=98.39% 00:27:50.041 cpu : usr=99.17%, sys=0.56%, ctx=9, majf=0, minf=58 00:27:50.041 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=2958110: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=494, BW=1977KiB/s (2025kB/s)(19.3MiB/10005msec) 00:27:50.041 slat (nsec): min=6896, max=99306, avg=19322.73, stdev=15563.10 00:27:50.041 clat (usec): min=5203, max=79937, avg=32288.94, stdev=4188.72 00:27:50.041 lat (usec): min=5213, max=79955, avg=32308.27, stdev=4188.45 00:27:50.041 clat percentiles 
(usec): 00:27:50.041 | 1.00th=[20579], 5.00th=[25297], 10.00th=[27657], 20.00th=[32113], 00:27:50.041 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32637], 80.00th=[32900], 90.00th=[35390], 95.00th=[39060], 00:27:50.041 | 99.00th=[45876], 99.50th=[49021], 99.90th=[63177], 99.95th=[80217], 00:27:50.041 | 99.99th=[80217] 00:27:50.041 bw ( KiB/s): min= 1712, max= 2064, per=4.18%, avg=1970.53, stdev=78.79, samples=19 00:27:50.041 iops : min= 428, max= 516, avg=492.63, stdev=19.70, samples=19 00:27:50.041 lat (msec) : 10=0.20%, 20=0.69%, 50=98.67%, 100=0.44% 00:27:50.041 cpu : usr=98.97%, sys=0.75%, ctx=10, majf=0, minf=102 00:27:50.041 IO depths : 1=0.1%, 2=0.2%, 4=2.3%, 8=80.4%, 16=17.1%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=89.3%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename1: (groupid=0, jobs=1): err= 0: pid=2958111: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=490, BW=1960KiB/s (2008kB/s)(19.2MiB/10022msec) 00:27:50.041 slat (nsec): min=7760, max=74119, avg=15603.42, stdev=10851.23 00:27:50.041 clat (usec): min=28138, max=46104, avg=32523.12, stdev=918.45 00:27:50.041 lat (usec): min=28147, max=46125, avg=32538.72, stdev=917.91 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32375], 00:27:50.041 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.041 | 99.00th=[34341], 99.50th=[34866], 99.90th=[45876], 99.95th=[45876], 00:27:50.041 | 99.99th=[45876] 00:27:50.041 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1958.20, stdev=73.23, samples=20 00:27:50.041 iops : min= 448, max= 512, avg=489.55, stdev=18.31, samples=20 00:27:50.041 lat (msec) : 50=100.00% 00:27:50.041 cpu : usr=98.80%, sys=0.78%, ctx=139, majf=0, minf=48 00:27:50.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename2: (groupid=0, jobs=1): err= 0: pid=2958112: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=495, BW=1983KiB/s (2031kB/s)(19.4MiB/10005msec) 00:27:50.041 slat (usec): min=7, max=107, avg=18.66, stdev=16.81 00:27:50.041 clat (usec): min=9707, max=62915, avg=32171.14, stdev=3821.19 00:27:50.041 lat (usec): min=9714, max=62936, avg=32189.79, stdev=3820.21 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[22152], 5.00th=[25822], 10.00th=[27657], 20.00th=[32113], 00:27:50.041 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32637], 80.00th=[32900], 90.00th=[35390], 95.00th=[38011], 00:27:50.041 | 99.00th=[40109], 99.50th=[46400], 99.90th=[62653], 99.95th=[62653], 00:27:50.041 | 99.99th=[63177] 00:27:50.041 bw ( KiB/s): min= 1776, max= 2048, per=4.19%, avg=1977.26, stdev=64.95, samples=19 00:27:50.041 iops : min= 444, max= 512, avg=494.32, stdev=16.24, samples=19 00:27:50.041 lat (msec) : 10=0.12%, 20=0.36%, 50=99.03%, 
100=0.48% 00:27:50.041 cpu : usr=99.03%, sys=0.69%, ctx=12, majf=0, minf=69 00:27:50.041 IO depths : 1=1.7%, 2=3.6%, 4=8.8%, 8=72.3%, 16=13.6%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 complete : 0=0.0%, 4=90.3%, 8=6.8%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.041 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.041 filename2: (groupid=0, jobs=1): err= 0: pid=2958113: Wed Apr 24 20:59:12 2024 00:27:50.041 read: IOPS=490, BW=1960KiB/s (2007kB/s)(19.2MiB/10023msec) 00:27:50.041 slat (nsec): min=7192, max=74419, avg=20134.93, stdev=12281.23 00:27:50.041 clat (usec): min=27883, max=46202, avg=32482.47, stdev=926.90 00:27:50.041 lat (usec): min=27898, max=46222, avg=32502.61, stdev=926.19 00:27:50.041 clat percentiles (usec): 00:27:50.041 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32113], 20.00th=[32113], 00:27:50.041 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.041 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.041 | 99.00th=[33817], 99.50th=[34866], 99.90th=[45876], 99.95th=[46400], 00:27:50.041 | 99.99th=[46400] 00:27:50.041 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1958.20, stdev=73.23, samples=20 00:27:50.041 iops : min= 448, max= 512, avg=489.55, stdev=18.31, samples=20 00:27:50.041 lat (msec) : 50=100.00% 00:27:50.041 cpu : usr=99.23%, sys=0.49%, ctx=9, majf=0, minf=54 00:27:50.041 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=2958114: Wed Apr 24 20:59:12 2024 00:27:50.042 read: IOPS=490, BW=1961KiB/s (2008kB/s)(19.2MiB/10020msec) 00:27:50.042 slat (usec): min=7, max=174, avg=23.65, stdev=13.29 00:27:50.042 clat (usec): min=27853, max=42397, avg=32407.36, stdev=762.31 00:27:50.042 lat (usec): min=27862, max=42418, avg=32431.01, stdev=761.96 00:27:50.042 clat percentiles (usec): 00:27:50.042 | 1.00th=[31065], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:27:50.042 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.042 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.042 | 99.00th=[33817], 99.50th=[34866], 99.90th=[42206], 99.95th=[42206], 00:27:50.042 | 99.99th=[42206] 00:27:50.042 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1960.42, stdev=74.55, samples=19 00:27:50.042 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:27:50.042 lat (msec) : 50=100.00% 00:27:50.042 cpu : usr=99.01%, sys=0.71%, ctx=11, majf=0, minf=53 00:27:50.042 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=2958115: Wed Apr 24 20:59:12 2024 00:27:50.042 read: IOPS=490, BW=1964KiB/s 
(2011kB/s)(19.2MiB/10006msec) 00:27:50.042 slat (nsec): min=6771, max=94174, avg=30756.02, stdev=15653.56 00:27:50.042 clat (usec): min=11274, max=51198, avg=32289.83, stdev=1772.43 00:27:50.042 lat (usec): min=11295, max=51217, avg=32320.59, stdev=1772.74 00:27:50.042 clat percentiles (usec): 00:27:50.042 | 1.00th=[31065], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:27:50.042 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:50.042 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[32900], 00:27:50.042 | 99.00th=[33817], 99.50th=[34341], 99.90th=[51119], 99.95th=[51119], 00:27:50.042 | 99.99th=[51119] 00:27:50.042 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1960.42, stdev=74.55, samples=19 00:27:50.042 iops : min= 448, max= 512, avg=490.11, stdev=18.64, samples=19 00:27:50.042 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:27:50.042 cpu : usr=98.49%, sys=0.92%, ctx=79, majf=0, minf=63 00:27:50.042 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=2958116: Wed Apr 24 20:59:12 2024 00:27:50.042 read: IOPS=490, BW=1960KiB/s (2007kB/s)(19.2MiB/10023msec) 00:27:50.042 slat (nsec): min=6803, max=73185, avg=24068.60, stdev=13197.29 00:27:50.042 clat (usec): min=27853, max=46302, avg=32423.30, stdev=934.96 00:27:50.042 lat (usec): min=27861, max=46322, avg=32447.37, stdev=934.78 00:27:50.042 clat percentiles (usec): 00:27:50.042 | 1.00th=[30802], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:27:50.042 | 30.00th=[32113], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.042 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.042 | 99.00th=[33817], 99.50th=[34866], 99.90th=[46400], 99.95th=[46400], 00:27:50.042 | 99.99th=[46400] 00:27:50.042 bw ( KiB/s): min= 1792, max= 2048, per=4.15%, avg=1958.20, stdev=73.23, samples=20 00:27:50.042 iops : min= 448, max= 512, avg=489.55, stdev=18.31, samples=20 00:27:50.042 lat (msec) : 50=100.00% 00:27:50.042 cpu : usr=98.99%, sys=0.74%, ctx=12, majf=0, minf=44 00:27:50.042 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=2958117: Wed Apr 24 20:59:12 2024 00:27:50.042 read: IOPS=494, BW=1980KiB/s (2027kB/s)(19.3MiB/10005msec) 00:27:50.042 slat (nsec): min=6786, max=67960, avg=18021.07, stdev=11095.47 00:27:50.042 clat (usec): min=14831, max=63500, avg=32178.46, stdev=3469.12 00:27:50.042 lat (usec): min=14839, max=63518, avg=32196.48, stdev=3469.73 00:27:50.042 clat percentiles (usec): 00:27:50.042 | 1.00th=[20579], 5.00th=[26084], 10.00th=[31065], 20.00th=[32113], 00:27:50.042 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.042 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[34866], 00:27:50.042 | 99.00th=[42730], 99.50th=[49021], 
99.90th=[63701], 99.95th=[63701], 00:27:50.042 | 99.99th=[63701] 00:27:50.042 bw ( KiB/s): min= 1795, max= 2160, per=4.19%, avg=1977.42, stdev=83.68, samples=19 00:27:50.042 iops : min= 448, max= 540, avg=494.32, stdev=21.01, samples=19 00:27:50.042 lat (msec) : 20=0.69%, 50=98.83%, 100=0.48% 00:27:50.042 cpu : usr=99.08%, sys=0.62%, ctx=61, majf=0, minf=54 00:27:50.042 IO depths : 1=4.0%, 2=8.6%, 4=19.0%, 8=59.0%, 16=9.4%, 32=0.0%, >=64=0.0% 00:27:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 complete : 0=0.0%, 4=92.7%, 8=2.5%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 issued rwts: total=4952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=2958118: Wed Apr 24 20:59:12 2024 00:27:50.042 read: IOPS=491, BW=1966KiB/s (2013kB/s)(19.2MiB/10007msec) 00:27:50.042 slat (nsec): min=7791, max=59185, avg=16880.11, stdev=9941.01 00:27:50.042 clat (usec): min=22123, max=55735, avg=32411.83, stdev=1449.03 00:27:50.042 lat (usec): min=22132, max=55758, avg=32428.71, stdev=1449.14 00:27:50.042 clat percentiles (usec): 00:27:50.042 | 1.00th=[28181], 5.00th=[31851], 10.00th=[32113], 20.00th=[32113], 00:27:50.042 | 30.00th=[32375], 40.00th=[32375], 50.00th=[32375], 60.00th=[32375], 00:27:50.042 | 70.00th=[32637], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.042 | 99.00th=[33817], 99.50th=[39060], 99.90th=[48497], 99.95th=[48497], 00:27:50.042 | 99.99th=[55837] 00:27:50.042 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1962.95, stdev=60.35, samples=19 00:27:50.042 iops : min= 480, max= 512, avg=490.74, stdev=15.09, samples=19 00:27:50.042 lat (msec) : 50=99.96%, 100=0.04% 00:27:50.042 cpu : usr=98.60%, sys=0.92%, ctx=148, majf=0, minf=56 00:27:50.042 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.042 filename2: (groupid=0, jobs=1): err= 0: pid=2958120: Wed Apr 24 20:59:12 2024 00:27:50.042 read: IOPS=489, BW=1958KiB/s (2005kB/s)(19.1MiB/10003msec) 00:27:50.042 slat (nsec): min=7909, max=91919, avg=32632.55, stdev=16502.58 00:27:50.042 clat (usec): min=21747, max=60905, avg=32397.06, stdev=1758.19 00:27:50.042 lat (usec): min=21757, max=60927, avg=32429.70, stdev=1756.68 00:27:50.042 clat percentiles (usec): 00:27:50.042 | 1.00th=[31327], 5.00th=[31851], 10.00th=[31851], 20.00th=[32113], 00:27:50.042 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32375], 00:27:50.042 | 70.00th=[32375], 80.00th=[32637], 90.00th=[32900], 95.00th=[33162], 00:27:50.042 | 99.00th=[33817], 99.50th=[34341], 99.90th=[61080], 99.95th=[61080], 00:27:50.042 | 99.99th=[61080] 00:27:50.042 bw ( KiB/s): min= 1792, max= 2048, per=4.14%, avg=1953.68, stdev=71.93, samples=19 00:27:50.042 iops : min= 448, max= 512, avg=488.42, stdev=17.98, samples=19 00:27:50.042 lat (msec) : 50=99.67%, 100=0.33% 00:27:50.042 cpu : usr=98.16%, sys=1.07%, ctx=353, majf=0, minf=52 00:27:50.042 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:50.042 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:27:50.042 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:50.042 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:50.042 00:27:50.042 Run status group 0 (all jobs): 00:27:50.042 READ: bw=46.1MiB/s (48.3MB/s), 1956KiB/s-2002KiB/s (2003kB/s-2050kB/s), io=462MiB (484MB), run=10001-10025msec 00:27:50.042 20:59:12 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:50.042 20:59:12 -- target/dif.sh@43 -- # local sub 00:27:50.042 20:59:12 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.042 20:59:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:50.042 20:59:12 -- target/dif.sh@36 -- # local sub_id=0 00:27:50.042 20:59:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:50.042 20:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.042 20:59:12 -- common/autotest_common.sh@10 -- # set +x 00:27:50.042 20:59:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.042 20:59:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:50.042 20:59:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.042 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.042 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.042 20:59:13 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.042 20:59:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:50.042 20:59:13 -- target/dif.sh@36 -- # local sub_id=1 00:27:50.042 20:59:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:50.042 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.042 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.042 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.042 20:59:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:50.042 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.042 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.042 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.042 20:59:13 -- target/dif.sh@45 -- # for sub in "$@" 00:27:50.042 20:59:13 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:50.042 20:59:13 -- target/dif.sh@36 -- # local sub_id=2 00:27:50.042 20:59:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:50.043 20:59:13 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:50.043 20:59:13 -- target/dif.sh@115 -- # numjobs=2 00:27:50.043 20:59:13 -- target/dif.sh@115 -- # iodepth=8 00:27:50.043 20:59:13 -- target/dif.sh@115 -- # runtime=5 00:27:50.043 20:59:13 -- target/dif.sh@115 -- # files=1 00:27:50.043 20:59:13 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:50.043 20:59:13 -- target/dif.sh@28 -- # local sub 00:27:50.043 20:59:13 -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.043 20:59:13 -- target/dif.sh@31 -- # create_subsystem 0 00:27:50.043 20:59:13 -- target/dif.sh@18 
-- # local sub_id=0 00:27:50.043 20:59:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 bdev_null0 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 [2024-04-24 20:59:13.095532] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@30 -- # for sub in "$@" 00:27:50.043 20:59:13 -- target/dif.sh@31 -- # create_subsystem 1 00:27:50.043 20:59:13 -- target/dif.sh@18 -- # local sub_id=1 00:27:50.043 20:59:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 bdev_null1 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:50.043 20:59:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:50.043 20:59:13 -- common/autotest_common.sh@10 -- # set +x 00:27:50.043 20:59:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:50.043 20:59:13 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:50.043 20:59:13 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:50.043 20:59:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:50.043 20:59:13 -- nvmf/common.sh@521 -- # config=() 00:27:50.043 20:59:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.043 20:59:13 -- 
nvmf/common.sh@521 -- # local subsystem config 00:27:50.043 20:59:13 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.043 20:59:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:50.043 20:59:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:50.043 { 00:27:50.043 "params": { 00:27:50.043 "name": "Nvme$subsystem", 00:27:50.043 "trtype": "$TEST_TRANSPORT", 00:27:50.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.043 "adrfam": "ipv4", 00:27:50.043 "trsvcid": "$NVMF_PORT", 00:27:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.043 "hdgst": ${hdgst:-false}, 00:27:50.043 "ddgst": ${ddgst:-false} 00:27:50.043 }, 00:27:50.043 "method": "bdev_nvme_attach_controller" 00:27:50.043 } 00:27:50.043 EOF 00:27:50.043 )") 00:27:50.043 20:59:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:50.043 20:59:13 -- target/dif.sh@82 -- # gen_fio_conf 00:27:50.043 20:59:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:50.043 20:59:13 -- target/dif.sh@54 -- # local file 00:27:50.043 20:59:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:50.043 20:59:13 -- target/dif.sh@56 -- # cat 00:27:50.043 20:59:13 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.043 20:59:13 -- common/autotest_common.sh@1327 -- # shift 00:27:50.043 20:59:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:50.043 20:59:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.043 20:59:13 -- nvmf/common.sh@543 -- # cat 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.043 20:59:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:50.043 20:59:13 -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:50.043 20:59:13 -- target/dif.sh@73 -- # cat 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:50.043 20:59:13 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:50.043 20:59:13 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:50.043 { 00:27:50.043 "params": { 00:27:50.043 "name": "Nvme$subsystem", 00:27:50.043 "trtype": "$TEST_TRANSPORT", 00:27:50.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:50.043 "adrfam": "ipv4", 00:27:50.043 "trsvcid": "$NVMF_PORT", 00:27:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:50.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:50.043 "hdgst": ${hdgst:-false}, 00:27:50.043 "ddgst": ${ddgst:-false} 00:27:50.043 }, 00:27:50.043 "method": "bdev_nvme_attach_controller" 00:27:50.043 } 00:27:50.043 EOF 00:27:50.043 )") 00:27:50.043 20:59:13 -- target/dif.sh@72 -- # (( file++ )) 00:27:50.043 20:59:13 -- target/dif.sh@72 -- # (( file <= files )) 00:27:50.043 20:59:13 -- nvmf/common.sh@543 -- # cat 00:27:50.043 20:59:13 -- nvmf/common.sh@545 -- # jq . 
00:27:50.043 20:59:13 -- nvmf/common.sh@546 -- # IFS=, 00:27:50.043 20:59:13 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:50.043 "params": { 00:27:50.043 "name": "Nvme0", 00:27:50.043 "trtype": "tcp", 00:27:50.043 "traddr": "10.0.0.2", 00:27:50.043 "adrfam": "ipv4", 00:27:50.043 "trsvcid": "4420", 00:27:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:50.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:50.043 "hdgst": false, 00:27:50.043 "ddgst": false 00:27:50.043 }, 00:27:50.043 "method": "bdev_nvme_attach_controller" 00:27:50.043 },{ 00:27:50.043 "params": { 00:27:50.043 "name": "Nvme1", 00:27:50.043 "trtype": "tcp", 00:27:50.043 "traddr": "10.0.0.2", 00:27:50.043 "adrfam": "ipv4", 00:27:50.043 "trsvcid": "4420", 00:27:50.043 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:50.043 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:50.043 "hdgst": false, 00:27:50.043 "ddgst": false 00:27:50.043 }, 00:27:50.043 "method": "bdev_nvme_attach_controller" 00:27:50.043 }' 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:50.043 20:59:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:50.043 20:59:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:50.043 20:59:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:50.043 20:59:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:50.043 20:59:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:50.043 20:59:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:50.043 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.043 ... 00:27:50.043 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:50.043 ... 
00:27:50.043 fio-3.35 00:27:50.043 Starting 4 threads 00:27:50.043 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.342 00:27:55.342 filename0: (groupid=0, jobs=1): err= 0: pid=2960443: Wed Apr 24 20:59:19 2024 00:27:55.342 read: IOPS=2065, BW=16.1MiB/s (16.9MB/s)(80.8MiB/5004msec) 00:27:55.342 slat (nsec): min=7749, max=37903, avg=8861.31, stdev=3114.64 00:27:55.342 clat (usec): min=1878, max=44111, avg=3848.01, stdev=1296.69 00:27:55.342 lat (usec): min=1886, max=44142, avg=3856.87, stdev=1296.79 00:27:55.342 clat percentiles (usec): 00:27:55.342 | 1.00th=[ 2737], 5.00th=[ 3032], 10.00th=[ 3195], 20.00th=[ 3425], 00:27:55.342 | 30.00th=[ 3523], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3752], 00:27:55.342 | 70.00th=[ 3851], 80.00th=[ 4113], 90.00th=[ 4883], 95.00th=[ 5407], 00:27:55.342 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 6587], 99.95th=[44303], 00:27:55.342 | 99.99th=[44303] 00:27:55.342 bw ( KiB/s): min=15216, max=17072, per=24.74%, avg=16529.60, stdev=508.23, samples=10 00:27:55.342 iops : min= 1902, max= 2134, avg=2066.20, stdev=63.53, samples=10 00:27:55.342 lat (msec) : 2=0.03%, 4=77.08%, 10=22.81%, 50=0.08% 00:27:55.342 cpu : usr=95.72%, sys=3.20%, ctx=48, majf=0, minf=0 00:27:55.342 IO depths : 1=0.1%, 2=0.4%, 4=71.8%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.342 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.343 issued rwts: total=10336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.343 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:55.343 filename0: (groupid=0, jobs=1): err= 0: pid=2960444: Wed Apr 24 20:59:19 2024 00:27:55.343 read: IOPS=2224, BW=17.4MiB/s (18.2MB/s)(86.9MiB/5002msec) 00:27:55.343 slat (nsec): min=7684, max=38810, avg=8535.60, stdev=2294.15 00:27:55.343 clat (usec): min=1689, max=5806, avg=3575.56, stdev=459.82 00:27:55.343 lat (usec): min=1697, max=5814, avg=3584.10, stdev=459.69 00:27:55.343 clat percentiles (usec): 00:27:55.343 | 1.00th=[ 2507], 5.00th=[ 2835], 10.00th=[ 3032], 20.00th=[ 3261], 00:27:55.343 | 30.00th=[ 3359], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3589], 00:27:55.343 | 70.00th=[ 3785], 80.00th=[ 3818], 90.00th=[ 4113], 95.00th=[ 4359], 00:27:55.343 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5276], 99.95th=[ 5473], 00:27:55.343 | 99.99th=[ 5800] 00:27:55.343 bw ( KiB/s): min=17296, max=18340, per=26.64%, avg=17797.20, stdev=330.28, samples=10 00:27:55.343 iops : min= 2162, max= 2292, avg=2224.60, stdev=41.19, samples=10 00:27:55.343 lat (msec) : 2=0.16%, 4=88.53%, 10=11.30% 00:27:55.343 cpu : usr=97.56%, sys=2.02%, ctx=90, majf=0, minf=0 00:27:55.343 IO depths : 1=0.1%, 2=0.6%, 4=65.9%, 8=33.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.343 complete : 0=0.0%, 4=97.2%, 8=2.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.343 issued rwts: total=11129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.343 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:55.343 filename1: (groupid=0, jobs=1): err= 0: pid=2960445: Wed Apr 24 20:59:19 2024 00:27:55.343 read: IOPS=2017, BW=15.8MiB/s (16.5MB/s)(78.8MiB/5002msec) 00:27:55.343 slat (nsec): min=7749, max=42928, avg=8392.09, stdev=1994.35 00:27:55.343 clat (usec): min=2133, max=6586, avg=3942.23, stdev=699.04 00:27:55.343 lat (usec): min=2142, max=6598, avg=3950.63, stdev=698.91 00:27:55.343 clat percentiles (usec): 00:27:55.343 | 1.00th=[ 2933], 
5.00th=[ 3261], 10.00th=[ 3425], 20.00th=[ 3490], 00:27:55.343 | 30.00th=[ 3556], 40.00th=[ 3687], 50.00th=[ 3720], 60.00th=[ 3785], 00:27:55.343 | 70.00th=[ 3884], 80.00th=[ 4146], 90.00th=[ 5276], 95.00th=[ 5538], 00:27:55.343 | 99.00th=[ 6063], 99.50th=[ 6063], 99.90th=[ 6390], 99.95th=[ 6390], 00:27:55.343 | 99.99th=[ 6587] 00:27:55.343 bw ( KiB/s): min=15856, max=16288, per=24.14%, avg=16122.67, stdev=127.50, samples=9 00:27:55.343 iops : min= 1982, max= 2036, avg=2015.33, stdev=15.94, samples=9 00:27:55.343 lat (msec) : 4=73.21%, 10=26.79% 00:27:55.343 cpu : usr=97.72%, sys=2.02%, ctx=6, majf=0, minf=9 00:27:55.343 IO depths : 1=0.1%, 2=0.1%, 4=73.1%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.343 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.343 issued rwts: total=10091,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.343 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:55.343 filename1: (groupid=0, jobs=1): err= 0: pid=2960446: Wed Apr 24 20:59:19 2024 00:27:55.343 read: IOPS=2043, BW=16.0MiB/s (16.7MB/s)(79.9MiB/5003msec) 00:27:55.343 slat (nsec): min=7746, max=36198, avg=8784.63, stdev=3131.63 00:27:55.343 clat (usec): min=2343, max=6831, avg=3889.30, stdev=693.43 00:27:55.343 lat (usec): min=2358, max=6839, avg=3898.09, stdev=693.26 00:27:55.343 clat percentiles (usec): 00:27:55.343 | 1.00th=[ 2835], 5.00th=[ 3163], 10.00th=[ 3326], 20.00th=[ 3458], 00:27:55.343 | 30.00th=[ 3556], 40.00th=[ 3621], 50.00th=[ 3687], 60.00th=[ 3752], 00:27:55.343 | 70.00th=[ 3818], 80.00th=[ 4080], 90.00th=[ 5276], 95.00th=[ 5473], 00:27:55.343 | 99.00th=[ 5997], 99.50th=[ 6063], 99.90th=[ 6390], 99.95th=[ 6390], 00:27:55.343 | 99.99th=[ 6849] 00:27:55.343 bw ( KiB/s): min=16144, max=16768, per=24.48%, avg=16350.40, stdev=179.99, samples=10 00:27:55.343 iops : min= 2018, max= 2096, avg=2044.20, stdev=22.58, samples=10 00:27:55.343 lat (msec) : 4=75.70%, 10=24.30% 00:27:55.343 cpu : usr=97.84%, sys=1.88%, ctx=6, majf=0, minf=0 00:27:55.343 IO depths : 1=0.1%, 2=0.1%, 4=72.9%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:55.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.343 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:55.343 issued rwts: total=10226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:55.343 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:55.343 00:27:55.343 Run status group 0 (all jobs): 00:27:55.343 READ: bw=65.2MiB/s (68.4MB/s), 15.8MiB/s-17.4MiB/s (16.5MB/s-18.2MB/s), io=326MiB (342MB), run=5002-5004msec 00:27:55.343 20:59:19 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:55.343 20:59:19 -- target/dif.sh@43 -- # local sub 00:27:55.343 20:59:19 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.343 20:59:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:55.343 20:59:19 -- target/dif.sh@36 -- # local sub_id=0 00:27:55.343 20:59:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:55.343 20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 20:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 20:59:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:55.343 20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 20:59:19 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 20:59:19 -- target/dif.sh@45 -- # for sub in "$@" 00:27:55.343 20:59:19 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:55.343 20:59:19 -- target/dif.sh@36 -- # local sub_id=1 00:27:55.343 20:59:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:55.343 20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 20:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 20:59:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:55.343 20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 20:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 00:27:55.343 real 0m24.203s 00:27:55.343 user 5m19.089s 00:27:55.343 sys 0m3.927s 00:27:55.343 20:59:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 ************************************ 00:27:55.343 END TEST fio_dif_rand_params 00:27:55.343 ************************************ 00:27:55.343 20:59:19 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:55.343 20:59:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:55.343 20:59:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 ************************************ 00:27:55.343 START TEST fio_dif_digest 00:27:55.343 ************************************ 00:27:55.343 20:59:19 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:27:55.343 20:59:19 -- target/dif.sh@123 -- # local NULL_DIF 00:27:55.343 20:59:19 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:55.343 20:59:19 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:55.343 20:59:19 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:55.343 20:59:19 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:55.343 20:59:19 -- target/dif.sh@127 -- # numjobs=3 00:27:55.343 20:59:19 -- target/dif.sh@127 -- # iodepth=3 00:27:55.343 20:59:19 -- target/dif.sh@127 -- # runtime=10 00:27:55.343 20:59:19 -- target/dif.sh@128 -- # hdgst=true 00:27:55.343 20:59:19 -- target/dif.sh@128 -- # ddgst=true 00:27:55.343 20:59:19 -- target/dif.sh@130 -- # create_subsystems 0 00:27:55.343 20:59:19 -- target/dif.sh@28 -- # local sub 00:27:55.343 20:59:19 -- target/dif.sh@30 -- # for sub in "$@" 00:27:55.343 20:59:19 -- target/dif.sh@31 -- # create_subsystem 0 00:27:55.343 20:59:19 -- target/dif.sh@18 -- # local sub_id=0 00:27:55.343 20:59:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:55.343 20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 bdev_null0 00:27:55.343 20:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 20:59:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:55.343 20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 20:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 20:59:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:55.343 
20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 20:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 20:59:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:55.343 20:59:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.343 20:59:19 -- common/autotest_common.sh@10 -- # set +x 00:27:55.343 [2024-04-24 20:59:19.663092] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:55.343 20:59:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.343 20:59:19 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:55.343 20:59:19 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:55.343 20:59:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:55.343 20:59:19 -- nvmf/common.sh@521 -- # config=() 00:27:55.343 20:59:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.343 20:59:19 -- nvmf/common.sh@521 -- # local subsystem config 00:27:55.343 20:59:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:55.343 20:59:19 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.343 20:59:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:55.343 { 00:27:55.343 "params": { 00:27:55.343 "name": "Nvme$subsystem", 00:27:55.343 "trtype": "$TEST_TRANSPORT", 00:27:55.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:55.343 "adrfam": "ipv4", 00:27:55.343 "trsvcid": "$NVMF_PORT", 00:27:55.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:55.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:55.343 "hdgst": ${hdgst:-false}, 00:27:55.343 "ddgst": ${ddgst:-false} 00:27:55.343 }, 00:27:55.343 "method": "bdev_nvme_attach_controller" 00:27:55.343 } 00:27:55.343 EOF 00:27:55.343 )") 00:27:55.343 20:59:19 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:55.343 20:59:19 -- target/dif.sh@82 -- # gen_fio_conf 00:27:55.344 20:59:19 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:55.344 20:59:19 -- target/dif.sh@54 -- # local file 00:27:55.344 20:59:19 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:55.344 20:59:19 -- target/dif.sh@56 -- # cat 00:27:55.344 20:59:19 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.344 20:59:19 -- common/autotest_common.sh@1327 -- # shift 00:27:55.344 20:59:19 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:55.344 20:59:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.344 20:59:19 -- nvmf/common.sh@543 -- # cat 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.344 20:59:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:55.344 20:59:19 -- target/dif.sh@72 -- # (( file <= files )) 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:55.344 20:59:19 -- nvmf/common.sh@545 -- # jq . 
00:27:55.344 20:59:19 -- nvmf/common.sh@546 -- # IFS=, 00:27:55.344 20:59:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:55.344 "params": { 00:27:55.344 "name": "Nvme0", 00:27:55.344 "trtype": "tcp", 00:27:55.344 "traddr": "10.0.0.2", 00:27:55.344 "adrfam": "ipv4", 00:27:55.344 "trsvcid": "4420", 00:27:55.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:55.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:55.344 "hdgst": true, 00:27:55.344 "ddgst": true 00:27:55.344 }, 00:27:55.344 "method": "bdev_nvme_attach_controller" 00:27:55.344 }' 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:55.344 20:59:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:55.344 20:59:19 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:55.344 20:59:19 -- common/autotest_common.sh@1331 -- # asan_lib= 00:27:55.344 20:59:19 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:27:55.344 20:59:19 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:55.344 20:59:19 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:55.604 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:55.604 ... 00:27:55.604 fio-3.35 00:27:55.604 Starting 3 threads 00:27:55.604 EAL: No free 2048 kB hugepages reported on node 1 00:28:07.829 00:28:07.829 filename0: (groupid=0, jobs=1): err= 0: pid=2961901: Wed Apr 24 20:59:30 2024 00:28:07.829 read: IOPS=240, BW=30.0MiB/s (31.5MB/s)(302MiB/10048msec) 00:28:07.829 slat (nsec): min=8039, max=37111, avg=9016.35, stdev=1118.78 00:28:07.829 clat (usec): min=7599, max=54547, avg=12457.59, stdev=3384.16 00:28:07.829 lat (usec): min=7608, max=54557, avg=12466.61, stdev=3384.16 00:28:07.829 clat percentiles (usec): 00:28:07.829 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028], 00:28:07.829 | 30.00th=[11207], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:28:07.829 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14484], 95.00th=[14877], 00:28:07.829 | 99.00th=[16057], 99.50th=[16909], 99.90th=[54264], 99.95th=[54264], 00:28:07.829 | 99.99th=[54789] 00:28:07.829 bw ( KiB/s): min=27392, max=33024, per=37.79%, avg=30860.80, stdev=1311.87, samples=20 00:28:07.829 iops : min= 214, max= 258, avg=241.10, stdev=10.25, samples=20 00:28:07.829 lat (msec) : 10=19.47%, 20=80.07%, 50=0.04%, 100=0.41% 00:28:07.829 cpu : usr=95.00%, sys=4.71%, ctx=24, majf=0, minf=143 00:28:07.829 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.829 issued rwts: total=2414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:07.829 filename0: (groupid=0, jobs=1): err= 0: pid=2961903: Wed Apr 24 20:59:30 2024 00:28:07.829 read: IOPS=227, BW=28.5MiB/s (29.9MB/s)(285MiB/10005msec) 00:28:07.829 slat (nsec): min=8035, max=65087, avg=9064.86, stdev=1627.27 00:28:07.829 clat (usec): 
min=7003, max=18115, avg=13147.86, stdev=2066.54 00:28:07.829 lat (usec): min=7012, max=18123, avg=13156.93, stdev=2066.57 00:28:07.829 clat percentiles (usec): 00:28:07.829 | 1.00th=[ 8717], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10945], 00:28:07.829 | 30.00th=[11994], 40.00th=[13042], 50.00th=[13566], 60.00th=[13960], 00:28:07.829 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15533], 95.00th=[16057], 00:28:07.829 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:28:07.829 | 99.99th=[18220] 00:28:07.829 bw ( KiB/s): min=26368, max=32256, per=35.71%, avg=29158.40, stdev=1522.24, samples=20 00:28:07.829 iops : min= 206, max= 252, avg=227.80, stdev=11.89, samples=20 00:28:07.829 lat (msec) : 10=9.16%, 20=90.84% 00:28:07.829 cpu : usr=95.12%, sys=4.59%, ctx=42, majf=0, minf=133 00:28:07.829 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.829 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:07.829 filename0: (groupid=0, jobs=1): err= 0: pid=2961904: Wed Apr 24 20:59:30 2024 00:28:07.829 read: IOPS=170, BW=21.3MiB/s (22.4MB/s)(214MiB/10044msec) 00:28:07.829 slat (nsec): min=8091, max=31500, avg=9003.48, stdev=918.44 00:28:07.829 clat (usec): min=8651, max=94954, avg=17534.95, stdev=12449.51 00:28:07.829 lat (usec): min=8660, max=94963, avg=17543.96, stdev=12449.58 00:28:07.829 clat percentiles (usec): 00:28:07.829 | 1.00th=[ 9896], 5.00th=[11731], 10.00th=[12256], 20.00th=[12780], 00:28:07.829 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:28:07.829 | 70.00th=[14222], 80.00th=[14746], 90.00th=[16581], 95.00th=[53740], 00:28:07.829 | 99.00th=[55313], 99.50th=[56361], 99.90th=[94897], 99.95th=[94897], 00:28:07.829 | 99.99th=[94897] 00:28:07.829 bw ( KiB/s): min=17152, max=28160, per=26.85%, avg=21926.40, stdev=3163.03, samples=20 00:28:07.829 iops : min= 134, max= 220, avg=171.30, stdev=24.71, samples=20 00:28:07.829 lat (msec) : 10=1.11%, 20=88.98%, 100=9.91% 00:28:07.829 cpu : usr=95.66%, sys=4.05%, ctx=25, majf=0, minf=123 00:28:07.829 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:07.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:07.829 issued rwts: total=1715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:07.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:07.829 00:28:07.829 Run status group 0 (all jobs): 00:28:07.829 READ: bw=79.7MiB/s (83.6MB/s), 21.3MiB/s-30.0MiB/s (22.4MB/s-31.5MB/s), io=801MiB (840MB), run=10005-10048msec 00:28:07.829 20:59:30 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:07.829 20:59:30 -- target/dif.sh@43 -- # local sub 00:28:07.829 20:59:30 -- target/dif.sh@45 -- # for sub in "$@" 00:28:07.829 20:59:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:07.829 20:59:30 -- target/dif.sh@36 -- # local sub_id=0 00:28:07.829 20:59:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:07.829 20:59:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.829 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:28:07.829 20:59:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.829 20:59:30 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:07.829 20:59:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.829 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:28:07.829 20:59:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.829 00:28:07.829 real 0m11.161s 00:28:07.829 user 0m41.174s 00:28:07.829 sys 0m1.663s 00:28:07.829 20:59:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:07.829 20:59:30 -- common/autotest_common.sh@10 -- # set +x 00:28:07.829 ************************************ 00:28:07.829 END TEST fio_dif_digest 00:28:07.829 ************************************ 00:28:07.829 20:59:30 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:07.829 20:59:30 -- target/dif.sh@147 -- # nvmftestfini 00:28:07.829 20:59:30 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:07.829 20:59:30 -- nvmf/common.sh@117 -- # sync 00:28:07.829 20:59:30 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:07.829 20:59:30 -- nvmf/common.sh@120 -- # set +e 00:28:07.829 20:59:30 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:07.829 20:59:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:07.829 rmmod nvme_tcp 00:28:07.829 rmmod nvme_fabrics 00:28:07.829 rmmod nvme_keyring 00:28:07.829 20:59:30 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:07.829 20:59:30 -- nvmf/common.sh@124 -- # set -e 00:28:07.829 20:59:30 -- nvmf/common.sh@125 -- # return 0 00:28:07.829 20:59:30 -- nvmf/common.sh@478 -- # '[' -n 2951462 ']' 00:28:07.829 20:59:30 -- nvmf/common.sh@479 -- # killprocess 2951462 00:28:07.829 20:59:30 -- common/autotest_common.sh@936 -- # '[' -z 2951462 ']' 00:28:07.829 20:59:30 -- common/autotest_common.sh@940 -- # kill -0 2951462 00:28:07.829 20:59:30 -- common/autotest_common.sh@941 -- # uname 00:28:07.830 20:59:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:07.830 20:59:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2951462 00:28:07.830 20:59:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:07.830 20:59:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:07.830 20:59:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2951462' 00:28:07.830 killing process with pid 2951462 00:28:07.830 20:59:30 -- common/autotest_common.sh@955 -- # kill 2951462 00:28:07.830 20:59:30 -- common/autotest_common.sh@960 -- # wait 2951462 00:28:07.830 20:59:31 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:07.830 20:59:31 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:09.212 Waiting for block devices as requested 00:28:09.473 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:09.473 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:09.473 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:09.473 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:09.734 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:09.734 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:09.734 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:09.995 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:09.995 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:10.256 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:10.256 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:10.256 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:10.516 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:10.516 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:10.516 0000:00:01.3 (8086 0b00): vfio-pci -> 
ioatdma 00:28:10.516 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:10.775 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:11.035 20:59:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:11.035 20:59:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:11.035 20:59:35 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.035 20:59:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:11.035 20:59:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:11.035 20:59:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:11.035 20:59:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.575 20:59:37 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:13.575 00:28:13.575 real 1m17.323s 00:28:13.575 user 7m52.304s 00:28:13.575 sys 0m20.018s 00:28:13.575 20:59:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:13.576 20:59:37 -- common/autotest_common.sh@10 -- # set +x 00:28:13.576 ************************************ 00:28:13.576 END TEST nvmf_dif 00:28:13.576 ************************************ 00:28:13.576 20:59:37 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:13.576 20:59:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:13.576 20:59:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:13.576 20:59:37 -- common/autotest_common.sh@10 -- # set +x 00:28:13.576 ************************************ 00:28:13.576 START TEST nvmf_abort_qd_sizes 00:28:13.576 ************************************ 00:28:13.576 20:59:37 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:13.576 * Looking for test storage... 
00:28:13.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:13.576 20:59:37 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.576 20:59:37 -- nvmf/common.sh@7 -- # uname -s 00:28:13.576 20:59:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.576 20:59:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.576 20:59:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.576 20:59:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.576 20:59:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.576 20:59:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.576 20:59:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.576 20:59:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.576 20:59:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.576 20:59:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.576 20:59:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:13.576 20:59:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:28:13.576 20:59:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.576 20:59:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.576 20:59:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.576 20:59:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:13.576 20:59:37 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.576 20:59:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.576 20:59:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.576 20:59:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.576 20:59:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.576 20:59:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.576 20:59:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.576 20:59:37 -- paths/export.sh@5 -- # export PATH 00:28:13.576 20:59:37 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.576 20:59:37 -- nvmf/common.sh@47 -- # : 0 00:28:13.576 20:59:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:13.576 20:59:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:13.576 20:59:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:13.576 20:59:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.576 20:59:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.576 20:59:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:13.576 20:59:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:13.576 20:59:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:13.576 20:59:37 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:13.576 20:59:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:13.576 20:59:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.576 20:59:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:13.576 20:59:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:13.576 20:59:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:13.576 20:59:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.576 20:59:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:13.576 20:59:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.576 20:59:37 -- nvmf/common.sh@403 -- # [[ phy != virt ]] 00:28:13.576 20:59:37 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:13.576 20:59:37 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:13.576 20:59:37 -- common/autotest_common.sh@10 -- # set +x 00:28:20.205 20:59:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:20.205 20:59:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.205 20:59:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.205 20:59:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.205 20:59:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.205 20:59:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.205 20:59:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.205 20:59:44 -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.205 20:59:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.205 20:59:44 -- nvmf/common.sh@296 -- # e810=() 00:28:20.205 20:59:44 -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.205 20:59:44 -- nvmf/common.sh@297 -- # x722=() 00:28:20.205 20:59:44 -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.205 20:59:44 -- nvmf/common.sh@298 -- # mlx=() 00:28:20.205 20:59:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.205 20:59:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.205 20:59:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.205 20:59:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.205 20:59:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.205 20:59:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.205 20:59:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:20.205 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:20.205 20:59:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.205 20:59:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:20.205 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:20.205 20:59:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.205 20:59:44 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.205 20:59:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.205 20:59:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.205 20:59:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:20.205 20:59:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.205 20:59:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:20.205 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:20.205 20:59:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.205 20:59:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.205 20:59:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.206 20:59:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:20.206 20:59:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.206 20:59:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:20.206 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:20.206 20:59:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.206 20:59:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:20.206 20:59:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:20.206 20:59:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:20.206 20:59:44 -- 
nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:20.206 20:59:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:20.206 20:59:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.206 20:59:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.206 20:59:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.206 20:59:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.206 20:59:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.206 20:59:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.206 20:59:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.206 20:59:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.206 20:59:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.206 20:59:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.206 20:59:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.206 20:59:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.206 20:59:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.206 20:59:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.466 20:59:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.466 20:59:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.466 20:59:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.466 20:59:44 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.466 20:59:44 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.466 20:59:44 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:28:20.466 00:28:20.466 --- 10.0.0.2 ping statistics --- 00:28:20.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.466 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:28:20.466 20:59:44 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:20.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:28:20.466 00:28:20.466 --- 10.0.0.1 ping statistics --- 00:28:20.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.466 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:28:20.466 20:59:44 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.466 20:59:44 -- nvmf/common.sh@411 -- # return 0 00:28:20.466 20:59:44 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:20.466 20:59:44 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:23.765 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:23.765 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:24.025 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:24.025 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:24.025 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:24.025 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:24.025 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:24.025 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:24.286 20:59:48 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:24.286 20:59:48 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:24.286 20:59:48 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:24.286 20:59:48 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:24.286 20:59:48 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:24.286 20:59:48 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:24.286 20:59:48 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:24.286 20:59:48 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:24.286 20:59:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:24.286 20:59:48 -- common/autotest_common.sh@10 -- # set +x 00:28:24.286 20:59:48 -- nvmf/common.sh@470 -- # nvmfpid=2971244 00:28:24.286 20:59:48 -- nvmf/common.sh@471 -- # waitforlisten 2971244 00:28:24.286 20:59:48 -- common/autotest_common.sh@817 -- # '[' -z 2971244 ']' 00:28:24.286 20:59:48 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:24.286 20:59:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:24.286 20:59:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:24.286 20:59:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:24.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:24.286 20:59:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:24.286 20:59:48 -- common/autotest_common.sh@10 -- # set +x 00:28:24.286 [2024-04-24 20:59:48.918950] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:28:24.286 [2024-04-24 20:59:48.919015] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:24.546 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.546 [2024-04-24 20:59:49.007444] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:24.546 [2024-04-24 20:59:49.103048] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:24.546 [2024-04-24 20:59:49.103107] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:24.546 [2024-04-24 20:59:49.103116] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:24.546 [2024-04-24 20:59:49.103123] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:24.546 [2024-04-24 20:59:49.103129] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:24.546 [2024-04-24 20:59:49.103262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.546 [2024-04-24 20:59:49.103400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:24.546 [2024-04-24 20:59:49.103569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.546 [2024-04-24 20:59:49.103570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:25.486 20:59:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:25.486 20:59:49 -- common/autotest_common.sh@850 -- # return 0 00:28:25.486 20:59:49 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:25.486 20:59:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:25.486 20:59:49 -- common/autotest_common.sh@10 -- # set +x 00:28:25.486 20:59:49 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:25.486 20:59:49 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:25.486 20:59:49 -- scripts/common.sh@310 -- # local nvmes 00:28:25.486 20:59:49 -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:28:25.486 20:59:49 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:25.486 20:59:49 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:25.486 20:59:49 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:28:25.486 20:59:49 -- scripts/common.sh@320 -- # uname -s 00:28:25.486 20:59:49 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:25.486 20:59:49 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:25.486 20:59:49 -- scripts/common.sh@325 -- # (( 1 )) 00:28:25.486 20:59:49 -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:25.486 20:59:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:25.486 20:59:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:25.486 20:59:49 -- 
common/autotest_common.sh@10 -- # set +x 00:28:25.486 ************************************ 00:28:25.486 START TEST spdk_target_abort 00:28:25.486 ************************************ 00:28:25.486 20:59:49 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:25.486 20:59:49 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:28:25.486 20:59:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.486 20:59:49 -- common/autotest_common.sh@10 -- # set +x 00:28:25.747 spdk_targetn1 00:28:25.747 20:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:25.747 20:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.747 20:59:50 -- common/autotest_common.sh@10 -- # set +x 00:28:25.747 [2024-04-24 20:59:50.301529] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:25.747 20:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:25.747 20:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.747 20:59:50 -- common/autotest_common.sh@10 -- # set +x 00:28:25.747 20:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:25.747 20:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.747 20:59:50 -- common/autotest_common.sh@10 -- # set +x 00:28:25.747 20:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:25.747 20:59:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.747 20:59:50 -- common/autotest_common.sh@10 -- # set +x 00:28:25.747 [2024-04-24 20:59:50.341791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:25.747 20:59:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:25.747 20:59:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 
00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:25.748 20:59:50 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:25.748 EAL: No free 2048 kB hugepages reported on node 1 00:28:26.007 [2024-04-24 20:59:50.603613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2296 len:8 PRP1 0x2000078c6000 PRP2 0x0 00:28:26.007 [2024-04-24 20:59:50.603639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:29.307 Initializing NVMe Controllers 00:28:29.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:29.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:29.307 Initialization complete. Launching workers. 00:28:29.307 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12499, failed: 1 00:28:29.307 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2990, failed to submit 9510 00:28:29.307 success 718, unsuccess 2272, failed 0 00:28:29.307 20:59:53 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:29.307 20:59:53 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:29.307 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.307 [2024-04-24 20:59:53.735810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:175 nsid:1 lba:296 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:28:29.307 [2024-04-24 20:59:53.735855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:175 cdw0:0 sqhd:002f p:1 m:0 dnr:0 00:28:29.307 [2024-04-24 20:59:53.743970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:480 len:8 PRP1 0x200007c44000 PRP2 0x0 00:28:29.307 [2024-04-24 20:59:53.743992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:28:29.307 [2024-04-24 20:59:53.814905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:2192 len:8 PRP1 0x200007c54000 PRP2 0x0 00:28:29.307 [2024-04-24 20:59:53.814929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:29.307 [2024-04-24 20:59:53.830931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:2488 len:8 PRP1 0x200007c58000 PRP2 0x0 00:28:29.307 [2024-04-24 20:59:53.830955] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:29.307 [2024-04-24 20:59:53.854796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:3032 len:8 PRP1 0x200007c46000 PRP2 0x0 00:28:29.307 [2024-04-24 20:59:53.854818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0081 p:0 m:0 dnr:0 00:28:31.217 [2024-04-24 20:59:55.747033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:46072 len:8 PRP1 0x200007c48000 PRP2 0x0 00:28:31.217 [2024-04-24 20:59:55.747073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:0084 p:1 m:0 dnr:0 00:28:31.477 [2024-04-24 20:59:55.938930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:189 nsid:1 lba:50496 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:28:31.477 [2024-04-24 20:59:55.938959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:189 cdw0:0 sqhd:00ad p:1 m:0 dnr:0 00:28:32.046 [2024-04-24 20:59:56.680997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:67720 len:8 PRP1 0x200007c44000 PRP2 0x0 00:28:32.046 [2024-04-24 20:59:56.681026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:32.306 Initializing NVMe Controllers 00:28:32.306 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:32.306 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:32.306 Initialization complete. Launching workers. 00:28:32.306 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8595, failed: 8 00:28:32.306 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1224, failed to submit 7379 00:28:32.306 success 372, unsuccess 852, failed 0 00:28:32.306 20:59:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:32.306 20:59:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:32.306 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.566 [2024-04-24 20:59:56.978586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:155 nsid:1 lba:856 len:8 PRP1 0x2000078d0000 PRP2 0x0 00:28:32.566 [2024-04-24 20:59:56.978610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:155 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:33.502 [2024-04-24 20:59:57.960469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:149 nsid:1 lba:111272 len:8 PRP1 0x2000078ec000 PRP2 0x0 00:28:33.502 [2024-04-24 20:59:57.960494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:149 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:35.461 Initializing NVMe Controllers 00:28:35.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:35.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:35.461 Initialization complete. Launching workers. 
00:28:35.461 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42014, failed: 2 00:28:35.461 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2560, failed to submit 39456 00:28:35.461 success 606, unsuccess 1954, failed 0 00:28:35.461 20:59:59 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:35.461 20:59:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.461 20:59:59 -- common/autotest_common.sh@10 -- # set +x 00:28:35.461 21:00:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:35.461 21:00:00 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:35.461 21:00:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:35.461 21:00:00 -- common/autotest_common.sh@10 -- # set +x 00:28:37.378 21:00:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.378 21:00:01 -- target/abort_qd_sizes.sh@61 -- # killprocess 2971244 00:28:37.378 21:00:01 -- common/autotest_common.sh@936 -- # '[' -z 2971244 ']' 00:28:37.378 21:00:01 -- common/autotest_common.sh@940 -- # kill -0 2971244 00:28:37.378 21:00:01 -- common/autotest_common.sh@941 -- # uname 00:28:37.378 21:00:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:37.378 21:00:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2971244 00:28:37.378 21:00:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:37.378 21:00:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:37.378 21:00:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2971244' 00:28:37.378 killing process with pid 2971244 00:28:37.378 21:00:01 -- common/autotest_common.sh@955 -- # kill 2971244 00:28:37.378 21:00:01 -- common/autotest_common.sh@960 -- # wait 2971244 00:28:37.645 00:28:37.645 real 0m12.048s 00:28:37.645 user 0m49.789s 00:28:37.645 sys 0m1.713s 00:28:37.645 21:00:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:37.645 21:00:02 -- common/autotest_common.sh@10 -- # set +x 00:28:37.645 ************************************ 00:28:37.645 END TEST spdk_target_abort 00:28:37.645 ************************************ 00:28:37.645 21:00:02 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:37.645 21:00:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:37.645 21:00:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:37.645 21:00:02 -- common/autotest_common.sh@10 -- # set +x 00:28:37.645 ************************************ 00:28:37.645 START TEST kernel_target_abort 00:28:37.645 ************************************ 00:28:37.645 21:00:02 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:37.645 21:00:02 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:37.645 21:00:02 -- nvmf/common.sh@717 -- # local ip 00:28:37.645 21:00:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:37.645 21:00:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:37.645 21:00:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.645 21:00:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.645 21:00:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:37.645 21:00:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.645 21:00:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:37.645 21:00:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:37.645 21:00:02 -- nvmf/common.sh@731 -- # 
echo 10.0.0.1 00:28:37.645 21:00:02 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:37.645 21:00:02 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:37.645 21:00:02 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:37.645 21:00:02 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:37.645 21:00:02 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:37.645 21:00:02 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:37.645 21:00:02 -- nvmf/common.sh@628 -- # local block nvme 00:28:37.645 21:00:02 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:28:37.645 21:00:02 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:37.645 21:00:02 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:37.645 21:00:02 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:41.844 Waiting for block devices as requested 00:28:41.844 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:41.844 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:42.104 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:42.104 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:42.104 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:42.364 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:42.364 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:42.364 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:42.625 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:42.625 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:42.886 21:00:07 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:42.886 21:00:07 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:42.886 21:00:07 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:42.886 21:00:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:42.886 21:00:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:42.886 21:00:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:42.886 21:00:07 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:42.886 21:00:07 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:42.886 21:00:07 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:42.886 No valid GPT data, bailing 00:28:42.886 21:00:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:42.886 21:00:07 -- scripts/common.sh@391 -- # pt= 00:28:42.886 21:00:07 -- scripts/common.sh@392 -- # return 1 00:28:42.886 21:00:07 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:42.886 21:00:07 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme0n1 ]] 00:28:42.886 21:00:07 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:42.886 21:00:07 -- nvmf/common.sh@648 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:42.886 21:00:07 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:42.886 21:00:07 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:42.886 21:00:07 -- nvmf/common.sh@656 -- # echo 1 00:28:42.886 21:00:07 -- nvmf/common.sh@657 -- # echo /dev/nvme0n1 00:28:42.886 21:00:07 -- nvmf/common.sh@658 -- # echo 1 00:28:42.886 21:00:07 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:42.886 21:00:07 -- nvmf/common.sh@661 -- # echo tcp 00:28:42.886 21:00:07 -- nvmf/common.sh@662 -- # echo 4420 00:28:42.886 21:00:07 -- nvmf/common.sh@663 -- # echo ipv4 00:28:42.886 21:00:07 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:42.886 21:00:07 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 --hostid=80c5a598-37ec-ec11-9bc7-a4bf01928204 -a 10.0.0.1 -t tcp -s 4420 00:28:43.146 00:28:43.146 Discovery Log Number of Records 2, Generation counter 2 00:28:43.146 =====Discovery Log Entry 0====== 00:28:43.146 trtype: tcp 00:28:43.146 adrfam: ipv4 00:28:43.146 subtype: current discovery subsystem 00:28:43.146 treq: not specified, sq flow control disable supported 00:28:43.146 portid: 1 00:28:43.146 trsvcid: 4420 00:28:43.146 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:43.146 traddr: 10.0.0.1 00:28:43.146 eflags: none 00:28:43.146 sectype: none 00:28:43.146 =====Discovery Log Entry 1====== 00:28:43.146 trtype: tcp 00:28:43.146 adrfam: ipv4 00:28:43.146 subtype: nvme subsystem 00:28:43.146 treq: not specified, sq flow control disable supported 00:28:43.146 portid: 1 00:28:43.146 trsvcid: 4420 00:28:43.146 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:43.146 traddr: 10.0.0.1 00:28:43.146 eflags: none 00:28:43.146 sectype: none 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:43.146 21:00:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:43.147 21:00:07 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr 
trsvcid subnqn 00:28:43.147 21:00:07 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.147 21:00:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:43.147 21:00:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.147 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.464 Initializing NVMe Controllers 00:28:46.464 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:46.464 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:46.464 Initialization complete. Launching workers. 00:28:46.464 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65464, failed: 0 00:28:46.464 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 65464, failed to submit 0 00:28:46.464 success 0, unsuccess 65464, failed 0 00:28:46.464 21:00:10 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:46.464 21:00:10 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.464 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.801 Initializing NVMe Controllers 00:28:49.801 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:49.801 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:49.801 Initialization complete. Launching workers. 00:28:49.801 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 105332, failed: 0 00:28:49.801 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 26554, failed to submit 78778 00:28:49.801 success 0, unsuccess 26554, failed 0 00:28:49.801 21:00:13 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:49.801 21:00:13 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:49.801 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.347 Initializing NVMe Controllers 00:28:52.347 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:52.347 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:52.347 Initialization complete. Launching workers. 
00:28:52.347 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 103619, failed: 0 00:28:52.347 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25926, failed to submit 77693 00:28:52.347 success 0, unsuccess 25926, failed 0 00:28:52.347 21:00:16 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:52.347 21:00:16 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:52.347 21:00:16 -- nvmf/common.sh@675 -- # echo 0 00:28:52.347 21:00:16 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.347 21:00:16 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:52.347 21:00:16 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:52.347 21:00:16 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:52.347 21:00:16 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:52.347 21:00:16 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:52.347 21:00:16 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:55.698 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:55.698 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:55.959 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.873 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:58.133 00:28:58.133 real 0m20.362s 00:28:58.133 user 0m9.656s 00:28:58.133 sys 0m6.191s 00:28:58.133 21:00:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:58.133 21:00:22 -- common/autotest_common.sh@10 -- # set +x 00:28:58.133 ************************************ 00:28:58.133 END TEST kernel_target_abort 00:28:58.133 ************************************ 00:28:58.133 21:00:22 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:58.133 21:00:22 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:58.133 21:00:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:58.133 21:00:22 -- nvmf/common.sh@117 -- # sync 00:28:58.133 21:00:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:58.133 21:00:22 -- nvmf/common.sh@120 -- # set +e 00:28:58.133 21:00:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:58.133 21:00:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:58.133 rmmod nvme_tcp 00:28:58.133 rmmod nvme_fabrics 00:28:58.133 rmmod nvme_keyring 00:28:58.133 21:00:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:58.133 21:00:22 -- nvmf/common.sh@124 -- # set -e 00:28:58.133 21:00:22 -- nvmf/common.sh@125 -- # return 0 00:28:58.133 21:00:22 -- nvmf/common.sh@478 -- # '[' -n 2971244 ']' 
00:28:58.133 21:00:22 -- nvmf/common.sh@479 -- # killprocess 2971244 00:28:58.133 21:00:22 -- common/autotest_common.sh@936 -- # '[' -z 2971244 ']' 00:28:58.133 21:00:22 -- common/autotest_common.sh@940 -- # kill -0 2971244 00:28:58.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2971244) - No such process 00:28:58.133 21:00:22 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2971244 is not found' 00:28:58.133 Process with pid 2971244 is not found 00:28:58.133 21:00:22 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:58.133 21:00:22 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:01.432 Waiting for block devices as requested 00:29:01.432 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:01.432 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:01.432 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:01.692 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:01.692 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:01.692 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:01.952 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:01.952 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:01.952 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:29:02.214 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:29:02.214 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:29:02.474 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:29:02.474 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:29:02.474 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:29:02.735 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:29:02.735 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:29:02.735 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:29:02.995 21:00:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:02.995 21:00:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:02.996 21:00:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:02.996 21:00:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:02.996 21:00:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.996 21:00:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:02.996 21:00:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.555 21:00:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:05.555 00:29:05.555 real 0m51.899s 00:29:05.555 user 1m4.695s 00:29:05.555 sys 0m18.572s 00:29:05.555 21:00:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:05.555 21:00:29 -- common/autotest_common.sh@10 -- # set +x 00:29:05.555 ************************************ 00:29:05.555 END TEST nvmf_abort_qd_sizes 00:29:05.555 ************************************ 00:29:05.555 21:00:29 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:05.555 21:00:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:05.555 21:00:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:05.555 21:00:29 -- common/autotest_common.sh@10 -- # set +x 00:29:05.555 ************************************ 00:29:05.556 START TEST keyring_file 00:29:05.556 ************************************ 00:29:05.556 21:00:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:29:05.556 * Looking for test storage... 
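The keyring_file test starting here first writes two TLS PSKs to disk and then registers them with a bdevperf instance over its RPC socket. A condensed sketch of the traced prep_key and keyring steps; the temp-file names are whatever mktemp returned in this run, the redirection into the file is inferred rather than visible in the xtrace, and the interchange encoding itself is produced by the format_interchange_psk helper seen in the trace (it delegates to format_key/python) rather than being spelled out here:

    key0=00112233445566778899aabbccddeeff
    key0path=$(mktemp)                                  # /tmp/tmp.AwvPsPOxie in this run
    format_interchange_psk "$key0" 0 > "$key0path"      # wrap the hex key as an NVMeTLSkey-1 PSK
    chmod 0600 "$key0path"
    # hand the key file to the running bdevperf via its RPC socket
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq '.[] | select(.name == "key0")'            # confirm the registered path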
00:29:05.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:29:05.556 21:00:29 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:29:05.556 21:00:29 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:05.556 21:00:29 -- nvmf/common.sh@7 -- # uname -s 00:29:05.556 21:00:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:05.556 21:00:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:05.556 21:00:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:05.556 21:00:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:05.556 21:00:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:05.556 21:00:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:05.556 21:00:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:05.556 21:00:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:05.556 21:00:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:05.556 21:00:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:05.556 21:00:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:05.556 21:00:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=80c5a598-37ec-ec11-9bc7-a4bf01928204 00:29:05.556 21:00:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:05.556 21:00:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:05.556 21:00:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:05.556 21:00:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:05.556 21:00:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:05.556 21:00:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:05.556 21:00:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:05.556 21:00:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:05.556 21:00:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.556 21:00:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.556 21:00:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.556 21:00:29 -- paths/export.sh@5 -- # export PATH 00:29:05.556 21:00:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:05.556 21:00:29 -- nvmf/common.sh@47 -- # : 0 00:29:05.556 21:00:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:05.556 21:00:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:05.556 21:00:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:05.556 21:00:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:05.556 21:00:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:05.556 21:00:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:05.556 21:00:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:05.556 21:00:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:05.556 21:00:30 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:05.556 21:00:30 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:05.556 21:00:30 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:05.556 21:00:30 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:05.556 21:00:30 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:05.556 21:00:30 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:05.556 21:00:30 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:05.556 21:00:30 -- keyring/common.sh@15 -- # local name key digest path 00:29:05.556 21:00:30 -- keyring/common.sh@17 -- # name=key0 00:29:05.556 21:00:30 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:05.556 21:00:30 -- keyring/common.sh@17 -- # digest=0 00:29:05.556 21:00:30 -- keyring/common.sh@18 -- # mktemp 00:29:05.556 21:00:30 -- keyring/common.sh@18 -- # path=/tmp/tmp.AwvPsPOxie 00:29:05.556 21:00:30 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:05.556 21:00:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:05.556 21:00:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:05.556 21:00:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:05.556 21:00:30 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:05.556 21:00:30 -- nvmf/common.sh@693 -- # digest=0 00:29:05.556 21:00:30 -- nvmf/common.sh@694 -- # python - 00:29:05.556 21:00:30 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.AwvPsPOxie 00:29:05.556 21:00:30 -- keyring/common.sh@23 -- # echo /tmp/tmp.AwvPsPOxie 00:29:05.556 21:00:30 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.AwvPsPOxie 00:29:05.556 21:00:30 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:05.556 21:00:30 -- keyring/common.sh@15 -- # local name key digest path 00:29:05.556 21:00:30 -- keyring/common.sh@17 -- # name=key1 00:29:05.556 21:00:30 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:05.556 21:00:30 -- keyring/common.sh@17 -- # digest=0 00:29:05.556 21:00:30 -- keyring/common.sh@18 -- # mktemp 00:29:05.556 21:00:30 -- keyring/common.sh@18 -- # path=/tmp/tmp.WXP2mffSfG 00:29:05.556 21:00:30 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:05.556 21:00:30 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:29:05.556 21:00:30 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:05.556 21:00:30 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:05.556 21:00:30 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:05.556 21:00:30 -- nvmf/common.sh@693 -- # digest=0 00:29:05.556 21:00:30 -- nvmf/common.sh@694 -- # python - 00:29:05.556 21:00:30 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WXP2mffSfG 00:29:05.556 21:00:30 -- keyring/common.sh@23 -- # echo /tmp/tmp.WXP2mffSfG 00:29:05.556 21:00:30 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.WXP2mffSfG 00:29:05.556 21:00:30 -- keyring/file.sh@30 -- # tgtpid=2982188 00:29:05.556 21:00:30 -- keyring/file.sh@32 -- # waitforlisten 2982188 00:29:05.556 21:00:30 -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:29:05.556 21:00:30 -- common/autotest_common.sh@817 -- # '[' -z 2982188 ']' 00:29:05.556 21:00:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.556 21:00:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:05.556 21:00:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:05.556 21:00:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:05.556 21:00:30 -- common/autotest_common.sh@10 -- # set +x 00:29:05.556 [2024-04-24 21:00:30.189697] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 00:29:05.556 [2024-04-24 21:00:30.189763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982188 ] 00:29:05.817 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.817 [2024-04-24 21:00:30.266983] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.817 [2024-04-24 21:00:30.336547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.758 21:00:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:06.758 21:00:31 -- common/autotest_common.sh@850 -- # return 0 00:29:06.758 21:00:31 -- keyring/file.sh@33 -- # rpc_cmd 00:29:06.758 21:00:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.758 21:00:31 -- common/autotest_common.sh@10 -- # set +x 00:29:06.758 [2024-04-24 21:00:31.066794] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.758 null0 00:29:06.758 [2024-04-24 21:00:31.098830] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:06.758 [2024-04-24 21:00:31.099176] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:06.758 [2024-04-24 21:00:31.106848] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:06.758 21:00:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.758 21:00:31 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:06.758 21:00:31 -- common/autotest_common.sh@638 -- # local es=0 00:29:06.758 21:00:31 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:06.758 21:00:31 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:06.758 21:00:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:06.758 21:00:31 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:06.758 21:00:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:06.758 21:00:31 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:06.758 21:00:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.758 21:00:31 -- common/autotest_common.sh@10 -- # set +x 00:29:06.758 [2024-04-24 21:00:31.122880] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:29:06.758 { 00:29:06.758 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:06.758 "secure_channel": false, 00:29:06.758 "listen_address": { 00:29:06.758 "trtype": "tcp", 00:29:06.758 "traddr": "127.0.0.1", 00:29:06.758 "trsvcid": "4420" 00:29:06.758 }, 00:29:06.758 "method": "nvmf_subsystem_add_listener", 00:29:06.758 "req_id": 1 00:29:06.758 } 00:29:06.758 Got JSON-RPC error response 00:29:06.758 response: 00:29:06.758 { 00:29:06.758 "code": -32602, 00:29:06.758 "message": "Invalid parameters" 00:29:06.758 } 00:29:06.758 21:00:31 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:06.758 21:00:31 -- common/autotest_common.sh@641 -- # es=1 00:29:06.758 21:00:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:06.758 21:00:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:06.758 21:00:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:06.758 21:00:31 -- keyring/file.sh@46 -- # bperfpid=2982211 00:29:06.758 21:00:31 -- keyring/file.sh@48 -- # waitforlisten 2982211 /var/tmp/bperf.sock 00:29:06.758 21:00:31 -- common/autotest_common.sh@817 -- # '[' -z 2982211 ']' 00:29:06.758 21:00:31 -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:06.758 21:00:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.758 21:00:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:06.758 21:00:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.759 21:00:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:06.759 21:00:31 -- common/autotest_common.sh@10 -- # set +x 00:29:06.759 [2024-04-24 21:00:31.189157] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
00:29:06.759 [2024-04-24 21:00:31.189251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2982211 ] 00:29:06.759 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.759 [2024-04-24 21:00:31.252486] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.759 [2024-04-24 21:00:31.327010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.019 21:00:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:07.019 21:00:31 -- common/autotest_common.sh@850 -- # return 0 00:29:07.019 21:00:31 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:07.019 21:00:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:07.019 21:00:31 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WXP2mffSfG 00:29:07.019 21:00:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WXP2mffSfG 00:29:07.278 21:00:31 -- keyring/file.sh@51 -- # get_key key0 00:29:07.278 21:00:31 -- keyring/file.sh@51 -- # jq -r .path 00:29:07.278 21:00:31 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.278 21:00:31 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.278 21:00:31 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.537 21:00:32 -- keyring/file.sh@51 -- # [[ /tmp/tmp.AwvPsPOxie == \/\t\m\p\/\t\m\p\.\A\w\v\P\s\P\O\x\i\e ]] 00:29:07.537 21:00:32 -- keyring/file.sh@52 -- # get_key key1 00:29:07.537 21:00:32 -- keyring/file.sh@52 -- # jq -r .path 00:29:07.537 21:00:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.538 21:00:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:07.538 21:00:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.796 21:00:32 -- keyring/file.sh@52 -- # [[ /tmp/tmp.WXP2mffSfG == \/\t\m\p\/\t\m\p\.\W\X\P\2\m\f\f\S\f\G ]] 00:29:07.796 21:00:32 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:07.796 21:00:32 -- keyring/common.sh@12 -- # get_key key0 00:29:07.796 21:00:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.796 21:00:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.796 21:00:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.796 21:00:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.055 21:00:32 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:08.055 21:00:32 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:08.055 21:00:32 -- keyring/common.sh@12 -- # get_key key1 00:29:08.055 21:00:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.055 21:00:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.055 21:00:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.055 21:00:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:08.055 21:00:32 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:08.055 
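The trace above generates two temp key files, registers them with the bdevperf keyring over /var/tmp/bperf.sock, and confirms their paths and refcounts. A minimal standalone sketch of that registration flow follows; the RPC names, socket path and jq filters are taken verbatim from the trace, while the file payload step (format_interchange_psk writing the NVMeTLSkey-1 interchange string) is elided rather than guessed.

key0path=$(mktemp)            # the run above got /tmp/tmp.AwvPsPOxie
# keyring/common.sh fills this file with the NVMeTLSkey-1 interchange string that
# format_interchange_psk derives from 00112233445566778899aabbccddeeff (digest 0);
# that python step is elided here.
chmod 0600 "$key0path"        # looser modes (e.g. 0660) are rejected later in this test

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_file_add_key key0 "$key0path"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_get_keys | jq -r '.[] | select(.name == "key0") | .path'     # -> "$key0path"
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'   # -> 1 before any attach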
21:00:32 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.055 21:00:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:08.315 [2024-04-24 21:00:32.828781] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:08.315 nvme0n1 00:29:08.315 21:00:32 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:08.315 21:00:32 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.315 21:00:32 -- keyring/common.sh@12 -- # get_key key0 00:29:08.315 21:00:32 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.315 21:00:32 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:08.315 21:00:32 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.578 21:00:33 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:08.578 21:00:33 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:08.578 21:00:33 -- keyring/common.sh@12 -- # get_key key1 00:29:08.578 21:00:33 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.578 21:00:33 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.578 21:00:33 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:08.578 21:00:33 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.838 21:00:33 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:08.838 21:00:33 -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:08.838 Running I/O for 1 seconds... 
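The two commands that drive this one-second run appear verbatim in the trace above; condensed into a sketch, the sequence is:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
# Attaching with --psk key0 bumps key0's refcnt from 1 to 2, which keyring/file.sh@59 checks.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests
# perform_tests runs the workload configured on the bdevperf command line
# (-q 128 -o 4k -w randrw -M 50 -t 1) against nvme0n1 and prints the table below.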
00:29:10.227 00:29:10.227 Latency(us) 00:29:10.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:10.227 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:29:10.227 nvme0n1 : 1.00 13935.93 54.44 0.00 0.00 9159.37 4614.83 17913.17 00:29:10.227 =================================================================================================================== 00:29:10.227 Total : 13935.93 54.44 0.00 0.00 9159.37 4614.83 17913.17 00:29:10.227 0 00:29:10.227 21:00:34 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:10.227 21:00:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:10.227 21:00:34 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:10.227 21:00:34 -- keyring/common.sh@12 -- # get_key key0 00:29:10.227 21:00:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.227 21:00:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.227 21:00:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.227 21:00:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.485 21:00:34 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:10.485 21:00:34 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:10.485 21:00:34 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.485 21:00:34 -- keyring/common.sh@12 -- # get_key key1 00:29:10.485 21:00:34 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.485 21:00:34 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.485 21:00:34 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:10.485 21:00:35 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:10.485 21:00:35 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:10.485 21:00:35 -- common/autotest_common.sh@638 -- # local es=0 00:29:10.485 21:00:35 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:10.485 21:00:35 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:10.485 21:00:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:10.485 21:00:35 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:10.485 21:00:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:10.485 21:00:35 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:10.485 21:00:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:10.743 [2024-04-24 21:00:35.302379] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:10.743 [2024-04-24 21:00:35.302645] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f3a60 (107): Transport endpoint is not connected 00:29:10.743 [2024-04-24 21:00:35.303639] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f3a60 (9): Bad file descriptor 00:29:10.743 [2024-04-24 21:00:35.304640] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:10.743 [2024-04-24 21:00:35.304650] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:10.743 [2024-04-24 21:00:35.304657] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:10.743 request: 00:29:10.743 { 00:29:10.743 "name": "nvme0", 00:29:10.743 "trtype": "tcp", 00:29:10.743 "traddr": "127.0.0.1", 00:29:10.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.743 "adrfam": "ipv4", 00:29:10.743 "trsvcid": "4420", 00:29:10.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.743 "psk": "key1", 00:29:10.743 "method": "bdev_nvme_attach_controller", 00:29:10.743 "req_id": 1 00:29:10.743 } 00:29:10.743 Got JSON-RPC error response 00:29:10.743 response: 00:29:10.743 { 00:29:10.743 "code": -32602, 00:29:10.743 "message": "Invalid parameters" 00:29:10.743 } 00:29:10.743 21:00:35 -- common/autotest_common.sh@641 -- # es=1 00:29:10.743 21:00:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:10.743 21:00:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:10.743 21:00:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:10.743 21:00:35 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:10.743 21:00:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.743 21:00:35 -- keyring/common.sh@12 -- # get_key key0 00:29:10.743 21:00:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.743 21:00:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.743 21:00:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.007 21:00:35 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:11.007 21:00:35 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:11.008 21:00:35 -- keyring/common.sh@12 -- # get_key key1 00:29:11.008 21:00:35 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:11.008 21:00:35 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:11.008 21:00:35 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:11.008 21:00:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.271 21:00:35 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:11.271 21:00:35 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:11.271 21:00:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:11.531 21:00:35 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:11.531 21:00:35 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:11.531 21:00:36 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:11.531 21:00:36 -- keyring/file.sh@77 -- # jq length 00:29:11.531 21:00:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:11.791 21:00:36 
-- keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:11.791 21:00:36 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.AwvPsPOxie 00:29:11.791 21:00:36 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:11.791 21:00:36 -- common/autotest_common.sh@638 -- # local es=0 00:29:11.791 21:00:36 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:11.791 21:00:36 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:11.791 21:00:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:11.791 21:00:36 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:11.791 21:00:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:11.791 21:00:36 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:11.791 21:00:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:12.050 [2024-04-24 21:00:36.560112] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.AwvPsPOxie': 0100660 00:29:12.050 [2024-04-24 21:00:36.560144] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:12.050 request: 00:29:12.050 { 00:29:12.050 "name": "key0", 00:29:12.050 "path": "/tmp/tmp.AwvPsPOxie", 00:29:12.050 "method": "keyring_file_add_key", 00:29:12.050 "req_id": 1 00:29:12.050 } 00:29:12.050 Got JSON-RPC error response 00:29:12.050 response: 00:29:12.050 { 00:29:12.050 "code": -1, 00:29:12.050 "message": "Operation not permitted" 00:29:12.050 } 00:29:12.050 21:00:36 -- common/autotest_common.sh@641 -- # es=1 00:29:12.050 21:00:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:12.050 21:00:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:12.050 21:00:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:12.050 21:00:36 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.AwvPsPOxie 00:29:12.050 21:00:36 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:12.050 21:00:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.AwvPsPOxie 00:29:12.317 21:00:36 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.AwvPsPOxie 00:29:12.317 21:00:36 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:12.317 21:00:36 -- keyring/common.sh@12 -- # get_key key0 00:29:12.317 21:00:36 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.317 21:00:36 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.317 21:00:36 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.317 21:00:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.575 21:00:36 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:12.575 21:00:36 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.575 21:00:36 -- common/autotest_common.sh@638 -- # local es=0 00:29:12.575 21:00:36 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.575 21:00:36 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:12.575 21:00:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:12.575 21:00:36 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:12.575 21:00:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:12.575 21:00:36 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.576 21:00:36 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:12.576 [2024-04-24 21:00:37.181712] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.AwvPsPOxie': No such file or directory 00:29:12.576 [2024-04-24 21:00:37.181740] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:12.576 [2024-04-24 21:00:37.181764] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:12.576 [2024-04-24 21:00:37.181771] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:12.576 [2024-04-24 21:00:37.181777] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:12.576 request: 00:29:12.576 { 00:29:12.576 "name": "nvme0", 00:29:12.576 "trtype": "tcp", 00:29:12.576 "traddr": "127.0.0.1", 00:29:12.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:12.576 "adrfam": "ipv4", 00:29:12.576 "trsvcid": "4420", 00:29:12.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.576 "psk": "key0", 00:29:12.576 "method": "bdev_nvme_attach_controller", 00:29:12.576 "req_id": 1 00:29:12.576 } 00:29:12.576 Got JSON-RPC error response 00:29:12.576 response: 00:29:12.576 { 00:29:12.576 "code": -19, 00:29:12.576 "message": "No such device" 00:29:12.576 } 00:29:12.576 21:00:37 -- common/autotest_common.sh@641 -- # es=1 00:29:12.576 21:00:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:12.576 21:00:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:12.576 21:00:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:12.576 21:00:37 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:12.576 21:00:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:12.834 21:00:37 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:12.834 21:00:37 -- keyring/common.sh@15 -- # local name key digest path 00:29:12.834 21:00:37 -- keyring/common.sh@17 -- # name=key0 00:29:12.834 21:00:37 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:12.834 21:00:37 -- keyring/common.sh@17 -- # digest=0 00:29:12.834 21:00:37 -- keyring/common.sh@18 -- # mktemp 00:29:12.834 21:00:37 -- keyring/common.sh@18 -- # path=/tmp/tmp.hE6NlrvJlP 00:29:12.834 21:00:37 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:12.834 21:00:37 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:12.834 21:00:37 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:12.834 21:00:37 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:12.834 21:00:37 -- 
nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:12.834 21:00:37 -- nvmf/common.sh@693 -- # digest=0 00:29:12.834 21:00:37 -- nvmf/common.sh@694 -- # python - 00:29:12.834 21:00:37 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.hE6NlrvJlP 00:29:12.834 21:00:37 -- keyring/common.sh@23 -- # echo /tmp/tmp.hE6NlrvJlP 00:29:12.834 21:00:37 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.hE6NlrvJlP 00:29:12.834 21:00:37 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hE6NlrvJlP 00:29:12.834 21:00:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hE6NlrvJlP 00:29:13.093 21:00:37 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.093 21:00:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:13.352 nvme0n1 00:29:13.352 21:00:37 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:13.352 21:00:37 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:13.352 21:00:37 -- keyring/common.sh@12 -- # get_key key0 00:29:13.352 21:00:37 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:13.352 21:00:37 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:13.352 21:00:37 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.610 21:00:38 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:13.610 21:00:38 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:13.610 21:00:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:13.869 21:00:38 -- keyring/file.sh@101 -- # get_key key0 00:29:13.869 21:00:38 -- keyring/file.sh@101 -- # jq -r .removed 00:29:13.869 21:00:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:13.869 21:00:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:13.869 21:00:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.128 21:00:38 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:14.128 21:00:38 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:14.128 21:00:38 -- keyring/common.sh@12 -- # get_key key0 00:29:14.128 21:00:38 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:14.128 21:00:38 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:14.128 21:00:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.128 21:00:38 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:14.128 21:00:38 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:14.128 21:00:38 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:14.128 21:00:38 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:14.388 21:00:38 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:14.388 21:00:38 -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:14.388 21:00:38 -- keyring/file.sh@104 -- # jq length 00:29:14.649 21:00:39 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:14.649 21:00:39 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.hE6NlrvJlP 00:29:14.649 21:00:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.hE6NlrvJlP 00:29:14.907 21:00:39 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.WXP2mffSfG 00:29:14.907 21:00:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.WXP2mffSfG 00:29:15.166 21:00:39 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.166 21:00:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:15.425 nvme0n1 00:29:15.425 21:00:39 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:15.425 21:00:39 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:15.684 21:00:40 -- keyring/file.sh@112 -- # config='{ 00:29:15.684 "subsystems": [ 00:29:15.684 { 00:29:15.684 "subsystem": "keyring", 00:29:15.684 "config": [ 00:29:15.684 { 00:29:15.684 "method": "keyring_file_add_key", 00:29:15.684 "params": { 00:29:15.684 "name": "key0", 00:29:15.684 "path": "/tmp/tmp.hE6NlrvJlP" 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": "keyring_file_add_key", 00:29:15.684 "params": { 00:29:15.684 "name": "key1", 00:29:15.684 "path": "/tmp/tmp.WXP2mffSfG" 00:29:15.684 } 00:29:15.684 } 00:29:15.684 ] 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "subsystem": "iobuf", 00:29:15.684 "config": [ 00:29:15.684 { 00:29:15.684 "method": "iobuf_set_options", 00:29:15.684 "params": { 00:29:15.684 "small_pool_count": 8192, 00:29:15.684 "large_pool_count": 1024, 00:29:15.684 "small_bufsize": 8192, 00:29:15.684 "large_bufsize": 135168 00:29:15.684 } 00:29:15.684 } 00:29:15.684 ] 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "subsystem": "sock", 00:29:15.684 "config": [ 00:29:15.684 { 00:29:15.684 "method": "sock_impl_set_options", 00:29:15.684 "params": { 00:29:15.684 "impl_name": "posix", 00:29:15.684 "recv_buf_size": 2097152, 00:29:15.684 "send_buf_size": 2097152, 00:29:15.684 "enable_recv_pipe": true, 00:29:15.684 "enable_quickack": false, 00:29:15.684 "enable_placement_id": 0, 00:29:15.684 "enable_zerocopy_send_server": true, 00:29:15.684 "enable_zerocopy_send_client": false, 00:29:15.684 "zerocopy_threshold": 0, 00:29:15.684 "tls_version": 0, 00:29:15.684 "enable_ktls": false 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": "sock_impl_set_options", 00:29:15.684 "params": { 00:29:15.684 "impl_name": "ssl", 00:29:15.684 "recv_buf_size": 4096, 00:29:15.684 "send_buf_size": 4096, 00:29:15.684 "enable_recv_pipe": true, 00:29:15.684 "enable_quickack": false, 00:29:15.684 "enable_placement_id": 0, 00:29:15.684 "enable_zerocopy_send_server": true, 00:29:15.684 "enable_zerocopy_send_client": false, 00:29:15.684 "zerocopy_threshold": 
0, 00:29:15.684 "tls_version": 0, 00:29:15.684 "enable_ktls": false 00:29:15.684 } 00:29:15.684 } 00:29:15.684 ] 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "subsystem": "vmd", 00:29:15.684 "config": [] 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "subsystem": "accel", 00:29:15.684 "config": [ 00:29:15.684 { 00:29:15.684 "method": "accel_set_options", 00:29:15.684 "params": { 00:29:15.684 "small_cache_size": 128, 00:29:15.684 "large_cache_size": 16, 00:29:15.684 "task_count": 2048, 00:29:15.684 "sequence_count": 2048, 00:29:15.684 "buf_count": 2048 00:29:15.684 } 00:29:15.684 } 00:29:15.684 ] 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "subsystem": "bdev", 00:29:15.684 "config": [ 00:29:15.684 { 00:29:15.684 "method": "bdev_set_options", 00:29:15.684 "params": { 00:29:15.684 "bdev_io_pool_size": 65535, 00:29:15.684 "bdev_io_cache_size": 256, 00:29:15.684 "bdev_auto_examine": true, 00:29:15.684 "iobuf_small_cache_size": 128, 00:29:15.684 "iobuf_large_cache_size": 16 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": "bdev_raid_set_options", 00:29:15.684 "params": { 00:29:15.684 "process_window_size_kb": 1024 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": "bdev_iscsi_set_options", 00:29:15.684 "params": { 00:29:15.684 "timeout_sec": 30 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": "bdev_nvme_set_options", 00:29:15.684 "params": { 00:29:15.684 "action_on_timeout": "none", 00:29:15.684 "timeout_us": 0, 00:29:15.684 "timeout_admin_us": 0, 00:29:15.684 "keep_alive_timeout_ms": 10000, 00:29:15.684 "arbitration_burst": 0, 00:29:15.684 "low_priority_weight": 0, 00:29:15.684 "medium_priority_weight": 0, 00:29:15.684 "high_priority_weight": 0, 00:29:15.684 "nvme_adminq_poll_period_us": 10000, 00:29:15.684 "nvme_ioq_poll_period_us": 0, 00:29:15.684 "io_queue_requests": 512, 00:29:15.684 "delay_cmd_submit": true, 00:29:15.684 "transport_retry_count": 4, 00:29:15.684 "bdev_retry_count": 3, 00:29:15.684 "transport_ack_timeout": 0, 00:29:15.684 "ctrlr_loss_timeout_sec": 0, 00:29:15.684 "reconnect_delay_sec": 0, 00:29:15.684 "fast_io_fail_timeout_sec": 0, 00:29:15.684 "disable_auto_failback": false, 00:29:15.684 "generate_uuids": false, 00:29:15.684 "transport_tos": 0, 00:29:15.684 "nvme_error_stat": false, 00:29:15.684 "rdma_srq_size": 0, 00:29:15.684 "io_path_stat": false, 00:29:15.684 "allow_accel_sequence": false, 00:29:15.684 "rdma_max_cq_size": 0, 00:29:15.684 "rdma_cm_event_timeout_ms": 0, 00:29:15.684 "dhchap_digests": [ 00:29:15.684 "sha256", 00:29:15.684 "sha384", 00:29:15.684 "sha512" 00:29:15.684 ], 00:29:15.684 "dhchap_dhgroups": [ 00:29:15.684 "null", 00:29:15.684 "ffdhe2048", 00:29:15.684 "ffdhe3072", 00:29:15.684 "ffdhe4096", 00:29:15.684 "ffdhe6144", 00:29:15.684 "ffdhe8192" 00:29:15.684 ] 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": "bdev_nvme_attach_controller", 00:29:15.684 "params": { 00:29:15.684 "name": "nvme0", 00:29:15.684 "trtype": "TCP", 00:29:15.684 "adrfam": "IPv4", 00:29:15.684 "traddr": "127.0.0.1", 00:29:15.684 "trsvcid": "4420", 00:29:15.684 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.684 "prchk_reftag": false, 00:29:15.684 "prchk_guard": false, 00:29:15.684 "ctrlr_loss_timeout_sec": 0, 00:29:15.684 "reconnect_delay_sec": 0, 00:29:15.684 "fast_io_fail_timeout_sec": 0, 00:29:15.684 "psk": "key0", 00:29:15.684 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.684 "hdgst": false, 00:29:15.684 "ddgst": false 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": 
"bdev_nvme_set_hotplug", 00:29:15.684 "params": { 00:29:15.684 "period_us": 100000, 00:29:15.684 "enable": false 00:29:15.684 } 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "method": "bdev_wait_for_examine" 00:29:15.684 } 00:29:15.684 ] 00:29:15.684 }, 00:29:15.684 { 00:29:15.684 "subsystem": "nbd", 00:29:15.684 "config": [] 00:29:15.684 } 00:29:15.684 ] 00:29:15.684 }' 00:29:15.684 21:00:40 -- keyring/file.sh@114 -- # killprocess 2982211 00:29:15.684 21:00:40 -- common/autotest_common.sh@936 -- # '[' -z 2982211 ']' 00:29:15.684 21:00:40 -- common/autotest_common.sh@940 -- # kill -0 2982211 00:29:15.684 21:00:40 -- common/autotest_common.sh@941 -- # uname 00:29:15.684 21:00:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:15.684 21:00:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2982211 00:29:15.684 21:00:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:15.684 21:00:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:15.684 21:00:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2982211' 00:29:15.684 killing process with pid 2982211 00:29:15.684 21:00:40 -- common/autotest_common.sh@955 -- # kill 2982211 00:29:15.684 Received shutdown signal, test time was about 1.000000 seconds 00:29:15.684 00:29:15.684 Latency(us) 00:29:15.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.685 =================================================================================================================== 00:29:15.685 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.685 21:00:40 -- common/autotest_common.sh@960 -- # wait 2982211 00:29:15.685 21:00:40 -- keyring/file.sh@117 -- # bperfpid=2984085 00:29:15.685 21:00:40 -- keyring/file.sh@119 -- # waitforlisten 2984085 /var/tmp/bperf.sock 00:29:15.685 21:00:40 -- common/autotest_common.sh@817 -- # '[' -z 2984085 ']' 00:29:15.685 21:00:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:15.685 21:00:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:15.685 21:00:40 -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:15.685 21:00:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:15.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:15.685 21:00:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:15.685 21:00:40 -- common/autotest_common.sh@10 -- # set +x 00:29:15.685 21:00:40 -- keyring/file.sh@115 -- # echo '{ 00:29:15.685 "subsystems": [ 00:29:15.685 { 00:29:15.685 "subsystem": "keyring", 00:29:15.685 "config": [ 00:29:15.685 { 00:29:15.685 "method": "keyring_file_add_key", 00:29:15.685 "params": { 00:29:15.685 "name": "key0", 00:29:15.685 "path": "/tmp/tmp.hE6NlrvJlP" 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "keyring_file_add_key", 00:29:15.685 "params": { 00:29:15.685 "name": "key1", 00:29:15.685 "path": "/tmp/tmp.WXP2mffSfG" 00:29:15.685 } 00:29:15.685 } 00:29:15.685 ] 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "subsystem": "iobuf", 00:29:15.685 "config": [ 00:29:15.685 { 00:29:15.685 "method": "iobuf_set_options", 00:29:15.685 "params": { 00:29:15.685 "small_pool_count": 8192, 00:29:15.685 "large_pool_count": 1024, 00:29:15.685 "small_bufsize": 8192, 00:29:15.685 "large_bufsize": 135168 00:29:15.685 } 00:29:15.685 } 00:29:15.685 ] 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "subsystem": "sock", 00:29:15.685 "config": [ 00:29:15.685 { 00:29:15.685 "method": "sock_impl_set_options", 00:29:15.685 "params": { 00:29:15.685 "impl_name": "posix", 00:29:15.685 "recv_buf_size": 2097152, 00:29:15.685 "send_buf_size": 2097152, 00:29:15.685 "enable_recv_pipe": true, 00:29:15.685 "enable_quickack": false, 00:29:15.685 "enable_placement_id": 0, 00:29:15.685 "enable_zerocopy_send_server": true, 00:29:15.685 "enable_zerocopy_send_client": false, 00:29:15.685 "zerocopy_threshold": 0, 00:29:15.685 "tls_version": 0, 00:29:15.685 "enable_ktls": false 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "sock_impl_set_options", 00:29:15.685 "params": { 00:29:15.685 "impl_name": "ssl", 00:29:15.685 "recv_buf_size": 4096, 00:29:15.685 "send_buf_size": 4096, 00:29:15.685 "enable_recv_pipe": true, 00:29:15.685 "enable_quickack": false, 00:29:15.685 "enable_placement_id": 0, 00:29:15.685 "enable_zerocopy_send_server": true, 00:29:15.685 "enable_zerocopy_send_client": false, 00:29:15.685 "zerocopy_threshold": 0, 00:29:15.685 "tls_version": 0, 00:29:15.685 "enable_ktls": false 00:29:15.685 } 00:29:15.685 } 00:29:15.685 ] 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "subsystem": "vmd", 00:29:15.685 "config": [] 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "subsystem": "accel", 00:29:15.685 "config": [ 00:29:15.685 { 00:29:15.685 "method": "accel_set_options", 00:29:15.685 "params": { 00:29:15.685 "small_cache_size": 128, 00:29:15.685 "large_cache_size": 16, 00:29:15.685 "task_count": 2048, 00:29:15.685 "sequence_count": 2048, 00:29:15.685 "buf_count": 2048 00:29:15.685 } 00:29:15.685 } 00:29:15.685 ] 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "subsystem": "bdev", 00:29:15.685 "config": [ 00:29:15.685 { 00:29:15.685 "method": "bdev_set_options", 00:29:15.685 "params": { 00:29:15.685 "bdev_io_pool_size": 65535, 00:29:15.685 "bdev_io_cache_size": 256, 00:29:15.685 "bdev_auto_examine": true, 00:29:15.685 "iobuf_small_cache_size": 128, 00:29:15.685 "iobuf_large_cache_size": 16 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "bdev_raid_set_options", 00:29:15.685 "params": { 00:29:15.685 "process_window_size_kb": 1024 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "bdev_iscsi_set_options", 00:29:15.685 "params": { 00:29:15.685 "timeout_sec": 30 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "bdev_nvme_set_options", 
00:29:15.685 "params": { 00:29:15.685 "action_on_timeout": "none", 00:29:15.685 "timeout_us": 0, 00:29:15.685 "timeout_admin_us": 0, 00:29:15.685 "keep_alive_timeout_ms": 10000, 00:29:15.685 "arbitration_burst": 0, 00:29:15.685 "low_priority_weight": 0, 00:29:15.685 "medium_priority_weight": 0, 00:29:15.685 "high_priority_weight": 0, 00:29:15.685 "nvme_adminq_poll_period_us": 10000, 00:29:15.685 "nvme_ioq_poll_period_us": 0, 00:29:15.685 "io_queue_requests": 512, 00:29:15.685 "delay_cmd_submit": true, 00:29:15.685 "transport_retry_count": 4, 00:29:15.685 "bdev_retry_count": 3, 00:29:15.685 "transport_ack_timeout": 0, 00:29:15.685 "ctrlr_loss_timeout_sec": 0, 00:29:15.685 "reconnect_delay_sec": 0, 00:29:15.685 "fast_io_fail_timeout_sec": 0, 00:29:15.685 "disable_auto_failback": false, 00:29:15.685 "generate_uuids": false, 00:29:15.685 "transport_tos": 0, 00:29:15.685 "nvme_error_stat": false, 00:29:15.685 "rdma_srq_size": 0, 00:29:15.685 "io_path_stat": false, 00:29:15.685 "allow_accel_sequence": false, 00:29:15.685 "rdma_max_cq_size": 0, 00:29:15.685 "rdma_cm_event_timeout_ms": 0, 00:29:15.685 "dhchap_digests": [ 00:29:15.685 "sha256", 00:29:15.685 "sha384", 00:29:15.685 "sha512" 00:29:15.685 ], 00:29:15.685 "dhchap_dhgroups": [ 00:29:15.685 "null", 00:29:15.685 "ffdhe2048", 00:29:15.685 "ffdhe3072", 00:29:15.685 "ffdhe4096", 00:29:15.685 "ffdhe6144", 00:29:15.685 "ffdhe8192" 00:29:15.685 ] 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "bdev_nvme_attach_controller", 00:29:15.685 "params": { 00:29:15.685 "name": "nvme0", 00:29:15.685 "trtype": "TCP", 00:29:15.685 "adrfam": "IPv4", 00:29:15.685 "traddr": "127.0.0.1", 00:29:15.685 "trsvcid": "4420", 00:29:15.685 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:15.685 "prchk_reftag": false, 00:29:15.685 "prchk_guard": false, 00:29:15.685 "ctrlr_loss_timeout_sec": 0, 00:29:15.685 "reconnect_delay_sec": 0, 00:29:15.685 "fast_io_fail_timeout_sec": 0, 00:29:15.685 "psk": "key0", 00:29:15.685 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:15.685 "hdgst": false, 00:29:15.685 "ddgst": false 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "bdev_nvme_set_hotplug", 00:29:15.685 "params": { 00:29:15.685 "period_us": 100000, 00:29:15.685 "enable": false 00:29:15.685 } 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "method": "bdev_wait_for_examine" 00:29:15.685 } 00:29:15.685 ] 00:29:15.685 }, 00:29:15.685 { 00:29:15.685 "subsystem": "nbd", 00:29:15.685 "config": [] 00:29:15.685 } 00:29:15.685 ] 00:29:15.685 }' 00:29:15.943 [2024-04-24 21:00:40.333244] Starting SPDK v24.05-pre git sha1 68e12c8e2 / DPDK 23.11.0 initialization... 
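At this point a second bdevperf instance (pid 2984085 in the trace) comes up with the configuration captured by save_config earlier, handed over as -c /dev/fd/63. A sketch of that launch is below; the save_config capture and the bdevperf flags are verbatim from the trace, while the use of bash process substitution is an inference from the /dev/fd/63 argument.

config=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock save_config)        # captured at keyring/file.sh@112 above
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
    -c <(echo "$config") &                     # shows up as -c /dev/fd/63 in the trace
bperfpid=$!
# With -z the app waits for perform_tests over the RPC socket; the keyring_file_add_key
# and bdev_nvme_attach_controller entries in the echoed JSON recreate key0/key1 and nvme0,
# which the keyring_get_keys checks at keyring/file.sh@120-123 below then verify.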
00:29:15.943 [2024-04-24 21:00:40.333299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2984085 ] 00:29:15.943 EAL: No free 2048 kB hugepages reported on node 1 00:29:15.943 [2024-04-24 21:00:40.392159] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.944 [2024-04-24 21:00:40.454666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.203 [2024-04-24 21:00:40.593459] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:16.773 21:00:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:16.773 21:00:41 -- common/autotest_common.sh@850 -- # return 0 00:29:16.773 21:00:41 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:16.773 21:00:41 -- keyring/file.sh@120 -- # jq length 00:29:16.773 21:00:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.773 21:00:41 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:16.773 21:00:41 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:16.773 21:00:41 -- keyring/common.sh@12 -- # get_key key0 00:29:16.773 21:00:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:16.773 21:00:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:16.773 21:00:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:16.773 21:00:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:17.034 21:00:41 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:17.034 21:00:41 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:17.034 21:00:41 -- keyring/common.sh@12 -- # get_key key1 00:29:17.034 21:00:41 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:17.034 21:00:41 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:17.034 21:00:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:17.034 21:00:41 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:17.294 21:00:41 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:17.294 21:00:41 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:17.294 21:00:41 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:17.294 21:00:41 -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:17.554 21:00:42 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:17.554 21:00:42 -- keyring/file.sh@1 -- # cleanup 00:29:17.554 21:00:42 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.hE6NlrvJlP /tmp/tmp.WXP2mffSfG 00:29:17.554 21:00:42 -- keyring/file.sh@20 -- # killprocess 2984085 00:29:17.554 21:00:42 -- common/autotest_common.sh@936 -- # '[' -z 2984085 ']' 00:29:17.554 21:00:42 -- common/autotest_common.sh@940 -- # kill -0 2984085 00:29:17.554 21:00:42 -- common/autotest_common.sh@941 -- # uname 00:29:17.554 21:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.554 21:00:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2984085 00:29:17.554 21:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:17.554 21:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:17.554 21:00:42 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2984085' 00:29:17.554 killing process with pid 2984085 00:29:17.554 21:00:42 -- common/autotest_common.sh@955 -- # kill 2984085 00:29:17.554 Received shutdown signal, test time was about 1.000000 seconds 00:29:17.554 00:29:17.554 Latency(us) 00:29:17.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.554 =================================================================================================================== 00:29:17.554 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:17.554 21:00:42 -- common/autotest_common.sh@960 -- # wait 2984085 00:29:17.813 21:00:42 -- keyring/file.sh@21 -- # killprocess 2982188 00:29:17.813 21:00:42 -- common/autotest_common.sh@936 -- # '[' -z 2982188 ']' 00:29:17.813 21:00:42 -- common/autotest_common.sh@940 -- # kill -0 2982188 00:29:17.813 21:00:42 -- common/autotest_common.sh@941 -- # uname 00:29:17.813 21:00:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:17.813 21:00:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2982188 00:29:17.813 21:00:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:17.813 21:00:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:17.813 21:00:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2982188' 00:29:17.813 killing process with pid 2982188 00:29:17.813 21:00:42 -- common/autotest_common.sh@955 -- # kill 2982188 00:29:17.813 [2024-04-24 21:00:42.262416] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:17.813 21:00:42 -- common/autotest_common.sh@960 -- # wait 2982188 00:29:18.073 00:29:18.073 real 0m12.614s 00:29:18.073 user 0m31.130s 00:29:18.073 sys 0m2.734s 00:29:18.073 21:00:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:18.073 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:29:18.073 ************************************ 00:29:18.073 END TEST keyring_file 00:29:18.073 ************************************ 00:29:18.073 21:00:42 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:18.073 21:00:42 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:18.073 21:00:42 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:18.073 21:00:42 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:18.073 21:00:42 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:18.073 21:00:42 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:18.073 21:00:42 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:18.073 21:00:42 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:18.073 21:00:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:18.073 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:29:18.073 21:00:42 -- spdk/autotest.sh@381 -- # 
autotest_cleanup 00:29:18.073 21:00:42 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:18.073 21:00:42 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:18.073 21:00:42 -- common/autotest_common.sh@10 -- # set +x 00:29:26.259 INFO: APP EXITING 00:29:26.259 INFO: killing all VMs 00:29:26.259 INFO: killing vhost app 00:29:26.259 WARN: no vhost pid file found 00:29:26.259 INFO: EXIT DONE 00:29:28.803 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:29:28.803 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:29:28.803 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:29:28.803 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:29:28.803 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:65:00.0 (144d a80a): Already using the nvme driver 00:29:29.064 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:29:29.064 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:29:29.325 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:29:32.622 Cleaning 00:29:32.622 Removing: /var/run/dpdk/spdk0/config 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:32.622 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:32.622 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:32.622 Removing: /var/run/dpdk/spdk1/config 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:32.622 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:32.622 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:32.882 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:32.882 Removing: /var/run/dpdk/spdk2/config 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:32.882 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:32.882 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:32.882 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:32.882 Removing: /var/run/dpdk/spdk3/config 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:32.882 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:32.882 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:32.882 Removing: /var/run/dpdk/spdk4/config 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:32.882 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:32.882 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:32.882 Removing: /dev/shm/bdev_svc_trace.1 00:29:32.882 Removing: /dev/shm/nvmf_trace.0 00:29:32.882 Removing: /dev/shm/spdk_tgt_trace.pid2567545 00:29:32.882 Removing: /var/run/dpdk/spdk0 00:29:32.882 Removing: /var/run/dpdk/spdk1 00:29:32.882 Removing: /var/run/dpdk/spdk2 00:29:32.882 Removing: /var/run/dpdk/spdk3 00:29:32.882 Removing: /var/run/dpdk/spdk4 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2566028 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2567545 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2568425 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2569560 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2569820 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2571122 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2571227 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2571682 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2572812 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2573386 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2573761 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2574143 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2574557 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2574980 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2575264 00:29:32.882 Removing: /var/run/dpdk/spdk_pid2575620 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2576009 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2577410 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2580755 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2581056 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2581423 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2581756 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2582137 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2582389 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2582861 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2583050 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2583377 00:29:33.140 Removing: 
/var/run/dpdk/spdk_pid2583577 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2583941 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2583962 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2584696 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2584923 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2585237 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2585551 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2585591 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2585993 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2586277 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2586510 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2586765 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2587112 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2587475 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2587830 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2588189 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2588478 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2588737 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2588978 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2589309 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2589669 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2590024 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2590389 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2590732 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2590982 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2591233 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2591514 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2591871 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2592235 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2592372 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2592822 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2597536 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2651314 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2656483 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2667537 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2674119 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2678925 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2679597 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2693637 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2693639 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2694651 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2695655 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2696663 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2697329 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2697377 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2697672 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2697925 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2698003 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2699009 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2700015 00:29:33.140 Removing: /var/run/dpdk/spdk_pid2701022 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2701696 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2701702 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2702037 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2703472 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2704880 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2715469 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2715828 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2720891 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2727961 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2731054 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2743211 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2753911 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2755974 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2757196 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2778160 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2782853 00:29:33.400 Removing: 
/var/run/dpdk/spdk_pid2788329 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2790264 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2792465 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2792605 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2792625 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2792889 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2793340 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2795358 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2796423 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2796807 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2799517 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2800219 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2800927 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2805983 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2818012 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2823290 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2830419 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2831992 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2833636 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2838946 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2843669 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2852816 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2852940 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2858131 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2858272 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2858481 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2858992 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2859143 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2864189 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2865012 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2870199 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2873696 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2880506 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2886673 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2895066 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2895118 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2917491 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2918166 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2918847 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2919524 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2920366 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2921029 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2921638 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2922304 00:29:33.400 Removing: /var/run/dpdk/spdk_pid2927692 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2928245 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2935376 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2935676 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2938332 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2945642 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2945648 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2951624 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2954053 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2956329 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2957766 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2960173 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2961501 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2971478 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2972119 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2972784 00:29:33.660 Removing: /var/run/dpdk/spdk_pid2975843 00:29:33.661 Removing: /var/run/dpdk/spdk_pid2976825 00:29:33.661 Removing: /var/run/dpdk/spdk_pid2977326 00:29:33.661 Removing: /var/run/dpdk/spdk_pid2982188 00:29:33.661 Removing: /var/run/dpdk/spdk_pid2982211 00:29:33.661 Removing: /var/run/dpdk/spdk_pid2984085 00:29:33.661 Clean 00:29:33.921 21:00:58 -- common/autotest_common.sh@1437 -- # 
return 0 00:29:33.921 21:00:58 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:29:33.921 21:00:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:33.921 21:00:58 -- common/autotest_common.sh@10 -- # set +x 00:29:33.921 21:00:58 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:29:33.921 21:00:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:33.921 21:00:58 -- common/autotest_common.sh@10 -- # set +x 00:29:33.921 21:00:58 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:33.921 21:00:58 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:33.921 21:00:58 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:33.921 21:00:58 -- spdk/autotest.sh@389 -- # hash lcov 00:29:33.921 21:00:58 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:33.921 21:00:58 -- spdk/autotest.sh@391 -- # hostname 00:29:33.921 21:00:58 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-10 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:34.183 geninfo: WARNING: invalid characters removed from testname! 00:30:00.754 21:01:22 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:00.754 21:01:25 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:03.294 21:01:27 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:05.226 21:01:29 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:07.773 21:01:32 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:09.691 21:01:34 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:12.239 21:01:36 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:12.239 21:01:36 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.239 21:01:36 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:12.239 21:01:36 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.239 21:01:36 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:12.239 21:01:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.239 21:01:36 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.239 21:01:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.239 21:01:36 -- paths/export.sh@5 -- $ export PATH 00:30:12.239 21:01:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.239 21:01:36 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:12.239 21:01:36 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:12.239 21:01:36 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713985296.XXXXXX 00:30:12.239 21:01:36 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713985296.qXeKxF 00:30:12.239 21:01:36 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:12.239 21:01:36 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:12.239 21:01:36 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:12.239 21:01:36 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:12.239 21:01:36 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:12.239 21:01:36 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:12.239 21:01:36 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:30:12.239 21:01:36 -- common/autotest_common.sh@10 -- $ set +x 00:30:12.240 21:01:36 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:30:12.240 21:01:36 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:30:12.240 21:01:36 -- pm/common@17 -- $ local monitor 00:30:12.240 21:01:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:12.240 21:01:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2995802 00:30:12.240 21:01:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:12.240 21:01:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2995804 00:30:12.240 21:01:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:12.240 21:01:36 -- pm/common@21 -- $ date +%s 00:30:12.240 21:01:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2995806 00:30:12.240 21:01:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:12.240 21:01:36 -- pm/common@21 -- $ date +%s 00:30:12.240 21:01:36 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2995809 00:30:12.240 21:01:36 -- pm/common@26 -- $ sleep 1 00:30:12.240 21:01:36 -- pm/common@21 -- $ date +%s 00:30:12.240 21:01:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713985296 00:30:12.240 21:01:36 -- pm/common@21 -- $ date +%s 00:30:12.240 21:01:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713985296 00:30:12.240 21:01:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713985296 00:30:12.240 21:01:36 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1713985296 00:30:12.240 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713985296_collect-cpu-load.pm.log 00:30:12.240 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713985296_collect-vmstat.pm.log 00:30:12.240 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713985296_collect-bmc-pm.bmc.pm.log 00:30:12.240 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1713985296_collect-cpu-temp.pm.log 00:30:13.183 
21:01:37 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:30:13.183 21:01:37 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:30:13.183 21:01:37 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:13.183 21:01:37 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:13.183 21:01:37 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:13.183 21:01:37 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:13.183 21:01:37 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:13.183 21:01:37 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:13.183 21:01:37 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:13.183 21:01:37 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:13.183 21:01:37 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:30:13.183 21:01:37 -- pm/common@30 -- $ signal_monitor_resources TERM 00:30:13.183 21:01:37 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:30:13.183 21:01:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:13.183 21:01:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:30:13.183 21:01:37 -- pm/common@45 -- $ pid=2995819 00:30:13.183 21:01:37 -- pm/common@52 -- $ sudo kill -TERM 2995819 00:30:13.183 21:01:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:13.183 21:01:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:30:13.183 21:01:37 -- pm/common@45 -- $ pid=2995830 00:30:13.183 21:01:37 -- pm/common@52 -- $ sudo kill -TERM 2995830 00:30:13.444 21:01:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:13.444 21:01:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:30:13.444 21:01:37 -- pm/common@45 -- $ pid=2995832 00:30:13.444 21:01:37 -- pm/common@52 -- $ sudo kill -TERM 2995832 00:30:13.444 21:01:37 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:30:13.444 21:01:37 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:30:13.444 21:01:37 -- pm/common@45 -- $ pid=2995833 00:30:13.444 21:01:37 -- pm/common@52 -- $ sudo kill -TERM 2995833 00:30:13.444 + [[ -n 2445992 ]] 00:30:13.444 + sudo kill 2445992 00:30:13.455 [Pipeline] } 00:30:13.473 [Pipeline] // stage 00:30:13.479 [Pipeline] } 00:30:13.496 [Pipeline] // timeout 00:30:13.501 [Pipeline] } 00:30:13.518 [Pipeline] // catchError 00:30:13.523 [Pipeline] } 00:30:13.540 [Pipeline] // wrap 00:30:13.547 [Pipeline] } 00:30:13.562 [Pipeline] // catchError 00:30:13.571 [Pipeline] stage 00:30:13.573 [Pipeline] { (Epilogue) 00:30:13.587 [Pipeline] catchError 00:30:13.589 [Pipeline] { 00:30:13.604 [Pipeline] echo 00:30:13.605 Cleanup processes 00:30:13.612 [Pipeline] sh 00:30:13.901 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:13.901 2995917 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:30:13.901 2996378 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:13.917 [Pipeline] sh 00:30:14.205 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:30:14.205 ++ grep -v 'sudo pgrep' 00:30:14.205 ++ awk '{print $1}' 00:30:14.205 + sudo kill -9 2995917 00:30:14.218 [Pipeline] sh 00:30:14.507 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:29.437 [Pipeline] sh 00:30:29.729 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:29.729 Artifacts sizes are good 00:30:29.746 [Pipeline] archiveArtifacts 00:30:29.755 Archiving artifacts 00:30:29.947 [Pipeline] sh 00:30:30.235 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:30.509 [Pipeline] cleanWs 00:30:30.519 [WS-CLEANUP] Deleting project workspace... 00:30:30.519 [WS-CLEANUP] Deferred wipeout is used... 00:30:30.526 [WS-CLEANUP] done 00:30:30.528 [Pipeline] } 00:30:30.544 [Pipeline] // catchError 00:30:30.555 [Pipeline] sh 00:30:30.839 + logger -p user.info -t JENKINS-CI 00:30:30.851 [Pipeline] } 00:30:30.868 [Pipeline] // stage 00:30:30.872 [Pipeline] } 00:30:30.887 [Pipeline] // node 00:30:30.892 [Pipeline] End of Pipeline 00:30:30.927 Finished: SUCCESS